2025-07-12 19:41:13.991183 | Job console starting
2025-07-12 19:41:14.005083 | Updating git repos
2025-07-12 19:41:14.083524 | Cloning repos into workspace
2025-07-12 19:41:14.298987 | Restoring repo states
2025-07-12 19:41:14.318780 | Merging changes
2025-07-12 19:41:14.865865 | Checking out repos
2025-07-12 19:41:15.103461 | Preparing playbooks
2025-07-12 19:41:15.797001 | Running Ansible setup
2025-07-12 19:41:20.076419 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-07-12 19:41:20.835017 |
2025-07-12 19:41:20.835181 | PLAY [Base pre]
2025-07-12 19:41:20.852250 |
2025-07-12 19:41:20.852376 | TASK [Setup log path fact]
2025-07-12 19:41:20.882330 | orchestrator | ok
2025-07-12 19:41:20.899558 |
2025-07-12 19:41:20.899725 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-12 19:41:20.941995 | orchestrator | ok
2025-07-12 19:41:20.955474 |
2025-07-12 19:41:20.955590 | TASK [emit-job-header : Print job information]
2025-07-12 19:41:21.009660 | # Job Information
2025-07-12 19:41:21.009914 | Ansible Version: 2.16.14
2025-07-12 19:41:21.010017 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-07-12 19:41:21.010072 | Pipeline: label
2025-07-12 19:41:21.010110 | Executor: 521e9411259a
2025-07-12 19:41:21.010142 | Triggered by: https://github.com/osism/testbed/pull/2743
2025-07-12 19:41:21.010177 | Event ID: d442eca0-5f55-11f0-9760-87cc9d1fa185
2025-07-12 19:41:21.018962 |
2025-07-12 19:41:21.019111 | LOOP [emit-job-header : Print node information]
2025-07-12 19:41:21.162236 | orchestrator | ok:
2025-07-12 19:41:21.162577 | orchestrator | # Node Information
2025-07-12 19:41:21.162640 | orchestrator | Inventory Hostname: orchestrator
2025-07-12 19:41:21.162686 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-07-12 19:41:21.162725 | orchestrator | Username: zuul-testbed01
2025-07-12 19:41:21.162763 | orchestrator | Distro: Debian 12.11
2025-07-12 19:41:21.162805 | orchestrator | Provider: static-testbed
2025-07-12 19:41:21.162893 | orchestrator | Region:
2025-07-12 19:41:21.162971 | orchestrator | Label: testbed-orchestrator
2025-07-12 19:41:21.163026 | orchestrator | Product Name: OpenStack Nova
2025-07-12 19:41:21.163062 | orchestrator | Interface IP: 81.163.193.140
2025-07-12 19:41:21.191275 |
2025-07-12 19:41:21.191435 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-07-12 19:41:21.672913 | orchestrator -> localhost | changed
2025-07-12 19:41:21.685638 |
2025-07-12 19:41:21.685781 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-07-12 19:41:22.770099 | orchestrator -> localhost | changed
2025-07-12 19:41:22.792270 |
2025-07-12 19:41:22.792463 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-07-12 19:41:23.070039 | orchestrator -> localhost | ok
2025-07-12 19:41:23.086795 |
2025-07-12 19:41:23.087059 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-07-12 19:41:23.125827 | orchestrator | ok
2025-07-12 19:41:23.146130 | orchestrator | included: /var/lib/zuul/builds/2fd9ca158f1c4f53bff4bdb765da3c0a/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-07-12 19:41:23.154320 |
2025-07-12 19:41:23.154425 | TASK [add-build-sshkey : Create Temp SSH key]
2025-07-12 19:41:24.842341 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-07-12 19:41:24.842827 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/2fd9ca158f1c4f53bff4bdb765da3c0a/work/2fd9ca158f1c4f53bff4bdb765da3c0a_id_rsa
2025-07-12 19:41:24.843068 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/2fd9ca158f1c4f53bff4bdb765da3c0a/work/2fd9ca158f1c4f53bff4bdb765da3c0a_id_rsa.pub
2025-07-12 19:41:24.843149 | orchestrator -> localhost | The key fingerprint is:
2025-07-12 19:41:24.843221 | orchestrator -> localhost | SHA256:+NkW9Va77Boa29lUubL9ZRZ1pfzdzYWZ/Bo/HuZPRWA zuul-build-sshkey
2025-07-12 19:41:24.843286 | orchestrator -> localhost | The key's randomart image is:
2025-07-12 19:41:24.843371 | orchestrator -> localhost | +---[RSA 3072]----+
2025-07-12 19:41:24.843437 | orchestrator -> localhost | | E .|
2025-07-12 19:41:24.843498 | orchestrator -> localhost | | + *.|
2025-07-12 19:41:24.843555 | orchestrator -> localhost | | . B *|
2025-07-12 19:41:24.843614 | orchestrator -> localhost | | . . . *X|
2025-07-12 19:41:24.843673 | orchestrator -> localhost | | . S . ++X|
2025-07-12 19:41:24.843741 | orchestrator -> localhost | | . o . ..+*|
2025-07-12 19:41:24.843802 | orchestrator -> localhost | | o o. o.O*|
2025-07-12 19:41:24.843860 | orchestrator -> localhost | | . = &++|
2025-07-12 19:41:24.843920 | orchestrator -> localhost | | o =.*=|
2025-07-12 19:41:24.843996 | orchestrator -> localhost | +----[SHA256]-----+
2025-07-12 19:41:24.844127 | orchestrator -> localhost | ok: Runtime: 0:00:01.188136
2025-07-12 19:41:24.858433 |
2025-07-12 19:41:24.858566 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-07-12 19:41:24.893636 | orchestrator | ok
2025-07-12 19:41:24.905819 | orchestrator | included: /var/lib/zuul/builds/2fd9ca158f1c4f53bff4bdb765da3c0a/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-07-12 19:41:24.914638 |
2025-07-12 19:41:24.914722 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-07-12 19:41:24.937462 | orchestrator | skipping: Conditional result was False
2025-07-12 19:41:24.945034 |
2025-07-12 19:41:24.945119 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-07-12 19:41:25.500567 | orchestrator | changed
2025-07-12 19:41:25.509453 |
2025-07-12 19:41:25.509596 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-07-12 19:41:25.799646 | orchestrator | ok
2025-07-12 19:41:25.810120 |
2025-07-12 19:41:25.810245 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-07-12 19:41:26.234441 | orchestrator | ok
2025-07-12 19:41:26.242768 |
2025-07-12 19:41:26.242887 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-07-12 19:41:26.666371 | orchestrator | ok
2025-07-12 19:41:26.672512 |
2025-07-12 19:41:26.672599 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-07-12 19:41:26.695343 | orchestrator | skipping: Conditional result was False
2025-07-12 19:41:26.702885 |
2025-07-12 19:41:26.702986 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-07-12 19:41:27.101801 | orchestrator -> localhost | changed
2025-07-12 19:41:27.126685 |
2025-07-12 19:41:27.126804 | TASK [add-build-sshkey : Add back temp key]
2025-07-12 19:41:27.434821 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/2fd9ca158f1c4f53bff4bdb765da3c0a/work/2fd9ca158f1c4f53bff4bdb765da3c0a_id_rsa (zuul-build-sshkey)
2025-07-12 19:41:27.435447 | orchestrator -> localhost | ok: Runtime: 0:00:00.019362
2025-07-12 19:41:27.452042 |
2025-07-12 19:41:27.452206 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-07-12 19:41:27.835452 | orchestrator | ok
2025-07-12 19:41:27.844642 |
2025-07-12 19:41:27.844778 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-07-12 19:41:27.878631 | orchestrator | skipping: Conditional result was False
2025-07-12 19:41:27.927132 |
2025-07-12 19:41:27.927238 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-07-12 19:41:28.318631 | orchestrator | ok
2025-07-12 19:41:28.333876 |
2025-07-12 19:41:28.334030 | TASK [validate-host : Define zuul_info_dir fact]
2025-07-12 19:41:28.374003 | orchestrator | ok
2025-07-12 19:41:28.386046 |
2025-07-12 19:41:28.386202 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-07-12 19:41:28.705728 | orchestrator -> localhost | ok
2025-07-12 19:41:28.723722 |
2025-07-12 19:41:28.723902 | TASK [validate-host : Collect information about the host]
2025-07-12 19:41:29.995140 | orchestrator | ok
2025-07-12 19:41:30.019362 |
2025-07-12 19:41:30.019557 | TASK [validate-host : Sanitize hostname]
2025-07-12 19:41:30.098182 | orchestrator | ok
2025-07-12 19:41:30.107589 |
2025-07-12 19:41:30.107744 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-07-12 19:41:30.712563 | orchestrator -> localhost | changed
2025-07-12 19:41:30.726897 |
2025-07-12 19:41:30.727092 | TASK [validate-host : Collect information about zuul worker]
2025-07-12 19:41:31.199679 | orchestrator | ok
2025-07-12 19:41:31.207399 |
2025-07-12 19:41:31.207543 | TASK [validate-host : Write out all zuul information for each host]
2025-07-12 19:41:31.778575 | orchestrator -> localhost | changed
2025-07-12 19:41:31.789596 |
2025-07-12 19:41:31.789722 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-07-12 19:41:32.098288 | orchestrator | ok
2025-07-12 19:41:32.106597 |
2025-07-12 19:41:32.106729 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-07-12 19:41:50.688692 | orchestrator | changed:
2025-07-12 19:41:50.688956 | orchestrator | .d..t...... src/
2025-07-12 19:41:50.689025 | orchestrator | .d..t...... src/github.com/
2025-07-12 19:41:50.689076 | orchestrator | .d..t...... src/github.com/osism/
2025-07-12 19:41:50.689117 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-07-12 19:41:50.689155 | orchestrator | RedHat.yml
2025-07-12 19:41:50.702520 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-07-12 19:41:50.702538 | orchestrator | RedHat.yml
2025-07-12 19:41:50.702591 | orchestrator | = 2.2.0"...
2025-07-12 19:42:03.476859 | orchestrator | 19:42:03.476 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-07-12 19:42:03.505647 | orchestrator | 19:42:03.505 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-07-12 19:42:04.768480 | orchestrator | 19:42:04.768 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.0...
2025-07-12 19:42:06.118078 | orchestrator | 19:42:06.117 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.0 (signed, key ID 4F80527A391BEFD2)
2025-07-12 19:42:07.308278 | orchestrator | 19:42:07.308 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-07-12 19:42:08.288623 | orchestrator | 19:42:08.288 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-07-12 19:42:08.792415 | orchestrator | 19:42:08.792 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-07-12 19:42:09.649836 | orchestrator | 19:42:09.649 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-07-12 19:42:09.649939 | orchestrator | 19:42:09.649 STDOUT terraform: Providers are signed by their developers.
2025-07-12 19:42:09.649978 | orchestrator | 19:42:09.649 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-07-12 19:42:09.650001 | orchestrator | 19:42:09.649 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-07-12 19:42:09.650054 | orchestrator | 19:42:09.649 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-07-12 19:42:09.650079 | orchestrator | 19:42:09.649 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-07-12 19:42:09.650101 | orchestrator | 19:42:09.649 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-07-12 19:42:09.650113 | orchestrator | 19:42:09.649 STDOUT terraform: you run "tofu init" in the future.
2025-07-12 19:42:09.650126 | orchestrator | 19:42:09.650 STDOUT terraform: OpenTofu has been successfully initialized!
2025-07-12 19:42:09.650141 | orchestrator | 19:42:09.650 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-07-12 19:42:09.650156 | orchestrator | 19:42:09.650 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-07-12 19:42:09.650171 | orchestrator | 19:42:09.650 STDOUT terraform: should now work.
2025-07-12 19:42:09.653232 | orchestrator | 19:42:09.650 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-07-12 19:42:09.653296 | orchestrator | 19:42:09.650 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-07-12 19:42:09.653311 | orchestrator | 19:42:09.650 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-07-12 19:42:09.761740 | orchestrator | 19:42:09.761 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-07-12 19:42:09.761871 | orchestrator | 19:42:09.761 WARN  The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-07-12 19:42:09.980858 | orchestrator | 19:42:09.980 STDOUT terraform: Created and switched to workspace "ci"!
2025-07-12 19:42:09.981050 | orchestrator | 19:42:09.980 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-07-12 19:42:09.981083 | orchestrator | 19:42:09.980 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-07-12 19:42:09.981097 | orchestrator | 19:42:09.980 STDOUT terraform: for this configuration.
2025-07-12 19:42:10.158318 | orchestrator | 19:42:10.158 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-07-12 19:42:10.158393 | orchestrator | 19:42:10.158 WARN  The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-07-12 19:42:10.257228 | orchestrator | 19:42:10.257 STDOUT terraform: ci.auto.tfvars
2025-07-12 19:42:10.269109 | orchestrator | 19:42:10.268 STDOUT terraform: default_custom.tf
2025-07-12 19:42:10.413402 | orchestrator | 19:42:10.407 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-07-12 19:42:11.557881 | orchestrator | 19:42:11.557 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-07-12 19:42:12.222232 | orchestrator | 19:42:12.219 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-07-12 19:42:12.463015 | orchestrator | 19:42:12.462 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-07-12 19:42:12.463117 | orchestrator | 19:42:12.462 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-07-12 19:42:12.463133 | orchestrator | 19:42:12.462 STDOUT terraform:   + create
2025-07-12 19:42:12.463146 | orchestrator | 19:42:12.462 STDOUT terraform:  <= read (data resources)
2025-07-12 19:42:12.463161 | orchestrator | 19:42:12.463 STDOUT terraform: OpenTofu will perform the following actions:
2025-07-12 19:42:12.463456 | orchestrator | 19:42:12.463 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-07-12 19:42:12.463500 | orchestrator | 19:42:12.463 STDOUT terraform:   # (config refers to values not yet known)
2025-07-12 19:42:12.463512 | orchestrator | 19:42:12.463 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-07-12 19:42:12.463526 | orchestrator | 19:42:12.463 STDOUT terraform:       + checksum = (known after apply)
2025-07-12 19:42:12.463575 | orchestrator | 19:42:12.463 STDOUT terraform:       + created_at = (known after apply)
2025-07-12 19:42:12.463592 | orchestrator | 19:42:12.463 STDOUT terraform:       + file = (known after apply)
2025-07-12 19:42:12.463812 | orchestrator | 19:42:12.463 STDOUT terraform:       + id = (known after apply)
2025-07-12 19:42:12.463832 | orchestrator | 19:42:12.463 STDOUT terraform:       + metadata = (known after apply)
2025-07-12 19:42:12.463870 | orchestrator | 19:42:12.463 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-07-12 19:42:12.463881 | orchestrator | 19:42:12.463 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-07-12 19:42:12.463892 | orchestrator | 19:42:12.463 STDOUT terraform:       + most_recent = true
2025-07-12 19:42:12.463903 | orchestrator | 19:42:12.463 STDOUT terraform:       + name = (known after apply)
2025-07-12 19:42:12.463914 | orchestrator | 19:42:12.463 STDOUT terraform:       + protected = (known after apply)
2025-07-12 19:42:12.463929 | orchestrator | 19:42:12.463 STDOUT terraform:       + region = (known after apply)
2025-07-12 19:42:12.463941 | orchestrator | 19:42:12.463 STDOUT terraform:       + schema = (known after apply)
2025-07-12 19:42:12.463952 | orchestrator | 19:42:12.463 STDOUT terraform:       + size_bytes = (known after apply)
2025-07-12 19:42:12.463963 | orchestrator | 19:42:12.463 STDOUT terraform:       + tags = (known after apply)
2025-07-12 19:42:12.463974 | orchestrator | 19:42:12.463 STDOUT terraform:       + updated_at = (known after apply)
2025-07-12 19:42:12.463985 | orchestrator | 19:42:12.463 STDOUT terraform:     }
2025-07-12 19:42:12.464271 | orchestrator | 19:42:12.464 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-07-12 19:42:12.464314 | orchestrator | 19:42:12.464 STDOUT terraform:   # (config refers to values not yet known)
2025-07-12 19:42:12.464326 | orchestrator | 19:42:12.464 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-07-12 19:42:12.464340 | orchestrator | 19:42:12.464 STDOUT terraform:       + checksum = (known after apply)
2025-07-12 19:42:12.464382 | orchestrator | 19:42:12.464 STDOUT terraform:       + created_at = (known after apply)
2025-07-12 19:42:12.464411 | orchestrator | 19:42:12.464 STDOUT terraform:       + file = (known after apply)
2025-07-12 19:42:12.464426 | orchestrator | 19:42:12.464 STDOUT terraform:       + id = (known after apply)
2025-07-12 19:42:12.464460 | orchestrator | 19:42:12.464 STDOUT terraform:       + metadata = (known after apply)
2025-07-12 19:42:12.464474 | orchestrator | 19:42:12.464 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-07-12 19:42:12.464515 | orchestrator | 19:42:12.464 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-07-12 19:42:12.464528 | orchestrator | 19:42:12.464 STDOUT terraform:       + most_recent = true
2025-07-12 19:42:12.464542 | orchestrator | 19:42:12.464 STDOUT terraform:       + name = (known after apply)
2025-07-12 19:42:12.464599 | orchestrator | 19:42:12.464 STDOUT terraform:       + protected = (known after apply)
2025-07-12 19:42:12.464612 | orchestrator | 19:42:12.464 STDOUT terraform:       + region = (known after apply)
2025-07-12 19:42:12.464627 | orchestrator | 19:42:12.464 STDOUT terraform:       + schema = (known after apply)
2025-07-12 19:42:12.464641 | orchestrator | 19:42:12.464 STDOUT terraform:       + size_bytes = (known after apply)
2025-07-12 19:42:12.464682 | orchestrator | 19:42:12.464 STDOUT terraform:       + tags = (known after apply)
2025-07-12 19:42:12.464714 | orchestrator | 19:42:12.464 STDOUT terraform:       + updated_at = (known after apply)
2025-07-12 19:42:12.464749 | orchestrator | 19:42:12.464 STDOUT terraform:     }
2025-07-12 19:42:12.465227 | orchestrator | 19:42:12.465 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-07-12 19:42:12.465296 | orchestrator | 19:42:12.465 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-07-12 19:42:12.465308 | orchestrator | 19:42:12.465 STDOUT terraform:       + content = (known after apply)
2025-07-12 19:42:12.465323 | orchestrator | 19:42:12.465 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-07-12 19:42:12.465337 | orchestrator | 19:42:12.465 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-07-12 19:42:12.465379 | orchestrator | 19:42:12.465 STDOUT terraform:       + content_md5 = (known after apply)
2025-07-12 19:42:12.465417 | orchestrator | 19:42:12.465 STDOUT terraform:       + content_sha1 = (known after apply)
2025-07-12 19:42:12.465448 | orchestrator | 19:42:12.465 STDOUT terraform:       + content_sha256 = (known after apply)
2025-07-12 19:42:12.465485 | orchestrator | 19:42:12.465 STDOUT terraform:       + content_sha512 = (known after apply)
2025-07-12 19:42:12.465499 | orchestrator | 19:42:12.465 STDOUT terraform:       + directory_permission = "0777"
2025-07-12 19:42:12.465526 | orchestrator | 19:42:12.465 STDOUT terraform:       + file_permission = "0644"
2025-07-12 19:42:12.465556 | orchestrator | 19:42:12.465 STDOUT terraform:       + filename = ".MANAGER_ADDRESS.ci"
2025-07-12 19:42:12.465595 | orchestrator | 19:42:12.465 STDOUT terraform:       + id = (known after apply)
2025-07-12 19:42:12.465610 | orchestrator | 19:42:12.465 STDOUT terraform:     }
2025-07-12 19:42:12.465901 | orchestrator | 19:42:12.465 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-07-12 19:42:12.465938 | orchestrator | 19:42:12.465 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-07-12 19:42:12.465951 | orchestrator | 19:42:12.465 STDOUT terraform:       + content = (known after apply)
2025-07-12 19:42:12.465988 | orchestrator | 19:42:12.465 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-07-12 19:42:12.466164 | orchestrator | 19:42:12.465 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-07-12 19:42:12.466183 | orchestrator | 19:42:12.466 STDOUT terraform:       + content_md5 = (known after apply)
2025-07-12 19:42:12.466193 | orchestrator | 19:42:12.466 STDOUT terraform:       + content_sha1 = (known after apply)
2025-07-12 19:42:12.466202 | orchestrator | 19:42:12.466 STDOUT terraform:       + content_sha256 = (known after apply)
2025-07-12 19:42:12.466223 | orchestrator | 19:42:12.466 STDOUT terraform:       + content_sha512 = (known after apply)
2025-07-12 19:42:12.466238 | orchestrator | 19:42:12.466 STDOUT terraform:       + directory_permission = "0777"
2025-07-12 19:42:12.466248 | orchestrator | 19:42:12.466 STDOUT terraform:       + file_permission = "0644"
2025-07-12 19:42:12.466257 | orchestrator | 19:42:12.466 STDOUT terraform:       + filename = ".id_rsa.ci.pub"
2025-07-12 19:42:12.466270 | orchestrator | 19:42:12.466 STDOUT terraform:       + id = (known after apply)
2025-07-12 19:42:12.466280 | orchestrator | 19:42:12.466 STDOUT terraform:     }
2025-07-12 19:42:12.466479 | orchestrator | 19:42:12.466 STDOUT terraform:   # local_file.inventory will be created
2025-07-12 19:42:12.466498 | orchestrator | 19:42:12.466 STDOUT terraform:   + resource "local_file" "inventory" {
2025-07-12 19:42:12.466530 | orchestrator | 19:42:12.466 STDOUT terraform:       + content = (known after apply)
2025-07-12 19:42:12.466554 | orchestrator | 19:42:12.466 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-07-12 19:42:12.466603 | orchestrator | 19:42:12.466 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-07-12 19:42:12.466662 | orchestrator | 19:42:12.466 STDOUT terraform:       + content_md5 = (known after apply)
2025-07-12 19:42:12.466677 | orchestrator | 19:42:12.466 STDOUT terraform:       + content_sha1 = (known after apply)
2025-07-12 19:42:12.466718 | orchestrator | 19:42:12.466 STDOUT terraform:       + content_sha256 = (known after apply)
2025-07-12 19:42:12.466754 | orchestrator | 19:42:12.466 STDOUT terraform:       + content_sha512 = (known after apply)
2025-07-12 19:42:12.466809 | orchestrator | 19:42:12.466 STDOUT terraform:       + directory_permission = "0777"
2025-07-12 19:42:12.466820 | orchestrator | 19:42:12.466 STDOUT terraform:       + file_permission = "0644"
2025-07-12 19:42:12.466833 | orchestrator | 19:42:12.466 STDOUT terraform:       + filename = "inventory.ci"
2025-07-12 19:42:12.466875 | orchestrator | 19:42:12.466 STDOUT terraform:       + id = (known after apply)
2025-07-12 19:42:12.466890 | orchestrator | 19:42:12.466 STDOUT terraform:     }
2025-07-12 19:42:12.467090 | orchestrator | 19:42:12.467 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-07-12 19:42:12.467109 | orchestrator | 19:42:12.467 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-07-12 19:42:12.467142 | orchestrator | 19:42:12.467 STDOUT terraform:       + content = (sensitive value)
2025-07-12 19:42:12.467174 | orchestrator | 19:42:12.467 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-07-12 19:42:12.467331 | orchestrator | 19:42:12.467 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-07-12 19:42:12.467349 | orchestrator | 19:42:12.467 STDOUT terraform:       + content_md5 = (known after apply)
2025-07-12 19:42:12.467358 | orchestrator | 19:42:12.467 STDOUT terraform:       + content_sha1 = (known after apply)
2025-07-12 19:42:12.467368 | orchestrator | 19:42:12.467 STDOUT terraform:       + content_sha256 = (known after apply)
2025-07-12 19:42:12.467378 | orchestrator | 19:42:12.467 STDOUT terraform:       + content_sha512 = (known after apply)
2025-07-12 19:42:12.467410 | orchestrator | 19:42:12.467 STDOUT terraform:       + directory_permission = "0700"
2025-07-12 19:42:12.467420 | orchestrator | 19:42:12.467 STDOUT terraform:       + file_permission = "0600"
2025-07-12 19:42:12.467430 | orchestrator | 19:42:12.467 STDOUT terraform:       + filename = ".id_rsa.ci"
2025-07-12 19:42:12.467452 | orchestrator | 19:42:12.467 STDOUT terraform:       + id = (known after apply)
2025-07-12 19:42:12.467463 | orchestrator | 19:42:12.467 STDOUT terraform:     }
2025-07-12 19:42:12.467475 | orchestrator | 19:42:12.467 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-07-12 19:42:12.467488 | orchestrator | 19:42:12.467 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-07-12 19:42:12.467500 | orchestrator | 19:42:12.467 STDOUT terraform:       + id = (known after apply)
2025-07-12 19:42:12.467513 | orchestrator | 19:42:12.467 STDOUT terraform:     }
2025-07-12 19:42:12.467799 | orchestrator | 19:42:12.467 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-07-12 19:42:12.467831 | orchestrator | 19:42:12.467 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-07-12 19:42:12.467870 | orchestrator | 19:42:12.467 STDOUT terraform:       + attachment = (known after apply)
2025-07-12 19:42:12.467884 | orchestrator | 19:42:12.467 STDOUT terraform:       + availability_zone = "nova"
2025-07-12 19:42:12.467916 | orchestrator | 19:42:12.467 STDOUT terraform:       + id = (known after apply)
2025-07-12 19:42:12.467947 | orchestrator | 19:42:12.467 STDOUT terraform:       + image_id = (known after apply)
2025-07-12 19:42:12.467984 | orchestrator | 19:42:12.467 STDOUT terraform:       + metadata = (known after apply)
2025-07-12 19:42:12.468025 | orchestrator | 19:42:12.467 STDOUT terraform:       + name = "testbed-volume-manager-base"
2025-07-12 19:42:12.468057 | orchestrator | 19:42:12.468 STDOUT terraform:       + region = (known after apply)
2025-07-12 19:42:12.468069 | orchestrator | 19:42:12.468 STDOUT terraform:       + size = 80
2025-07-12 19:42:12.468106 | orchestrator | 19:42:12.468 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-12 19:42:12.468120 | orchestrator | 19:42:12.468 STDOUT terraform:       + volume_type = "ssd"
2025-07-12 19:42:12.468130 | orchestrator | 19:42:12.468 STDOUT terraform:     }
2025-07-12 19:42:12.468529 | orchestrator | 19:42:12.468 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-07-12 19:42:12.468554 | orchestrator | 19:42:12.468 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-12 19:42:12.468564 | orchestrator | 19:42:12.468 STDOUT terraform:       + attachment = (known after apply)
2025-07-12 19:42:12.468574 | orchestrator | 19:42:12.468 STDOUT terraform:       + availability_zone = "nova"
2025-07-12 19:42:12.468583 | orchestrator | 19:42:12.468 STDOUT terraform:       + id = (known after apply)
2025-07-12 19:42:12.468593 | orchestrator | 19:42:12.468 STDOUT terraform:       + image_id = (known after apply)
2025-07-12 19:42:12.468603 | orchestrator | 19:42:12.468 STDOUT terraform:       + metadata = (known after apply)
2025-07-12 19:42:12.468617 | orchestrator | 19:42:12.468 STDOUT terraform:       + name = "testbed-volume-0-node-base"
2025-07-12 19:42:12.468627 | orchestrator | 19:42:12.468 STDOUT terraform:       + region = (known after apply)
2025-07-12 19:42:12.468636 | orchestrator | 19:42:12.468 STDOUT terraform:       + size = 80
2025-07-12 19:42:12.468646 | orchestrator | 19:42:12.468 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-12 19:42:12.468658 | orchestrator | 19:42:12.468 STDOUT terraform:       + volume_type = "ssd"
2025-07-12 19:42:12.468668 | orchestrator | 19:42:12.468 STDOUT terraform:     }
2025-07-12 19:42:12.468982 | orchestrator | 19:42:12.468 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-07-12 19:42:12.469023 | orchestrator | 19:42:12.468 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-12 19:42:12.469059 | orchestrator | 19:42:12.469 STDOUT terraform:       + attachment = (known after apply)
2025-07-12 19:42:12.469090 | orchestrator | 19:42:12.469 STDOUT terraform:       + availability_zone = "nova"
2025-07-12 19:42:12.469103 | orchestrator | 19:42:12.469 STDOUT terraform:       + id = (known after apply)
2025-07-12 19:42:12.469133 | orchestrator | 19:42:12.469 STDOUT terraform:       + image_id = (known after apply)
2025-07-12 19:42:12.469172 | orchestrator | 19:42:12.469 STDOUT terraform:       + metadata = (known after apply)
2025-07-12 19:42:12.469242 | orchestrator | 19:42:12.469 STDOUT terraform:       + name = "testbed-volume-1-node-base"
2025-07-12 19:42:12.469258 | orchestrator | 19:42:12.469 STDOUT terraform:       + region = (known after apply)
2025-07-12 19:42:12.469270 | orchestrator | 19:42:12.469 STDOUT terraform:       + size = 80
2025-07-12 19:42:12.469311 | orchestrator | 19:42:12.469 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-12 19:42:12.469326 | orchestrator | 19:42:12.469 STDOUT terraform:       + volume_type = "ssd"
2025-07-12 19:42:12.469336 | orchestrator | 19:42:12.469 STDOUT terraform:     }
2025-07-12 19:42:12.469702 | orchestrator | 19:42:12.469 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-07-12 19:42:12.469728 | orchestrator | 19:42:12.469 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-12 19:42:12.469739 | orchestrator | 19:42:12.469 STDOUT terraform:       + attachment = (known after apply)
2025-07-12 19:42:12.469749 | orchestrator | 19:42:12.469 STDOUT terraform:       + availability_zone = "nova"
2025-07-12 19:42:12.469758 | orchestrator | 19:42:12.469 STDOUT terraform:       + id = (known after apply)
2025-07-12 19:42:12.469805 | orchestrator | 19:42:12.469 STDOUT terraform:       + image_id = (known after apply)
2025-07-12 19:42:12.469841 | orchestrator | 19:42:12.469 STDOUT terraform:       + metadata = (known after apply)
2025-07-12 19:42:12.469889 | orchestrator | 19:42:12.469 STDOUT terraform:       + name = "testbed-volume-2-node-base"
2025-07-12 19:42:12.469901 | orchestrator | 19:42:12.469 STDOUT terraform:       + region = (known after apply)
2025-07-12 19:42:12.469911 | orchestrator | 19:42:12.469 STDOUT terraform:       + size = 80
2025-07-12 19:42:12.469920 | orchestrator | 19:42:12.469 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-12 19:42:12.469930 | orchestrator | 19:42:12.469 STDOUT terraform:       + volume_type = "ssd"
2025-07-12 19:42:12.469943 | orchestrator | 19:42:12.469 STDOUT terraform:     }
2025-07-12 19:42:12.470227 | orchestrator | 19:42:12.470 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-07-12 19:42:12.470268 | orchestrator | 19:42:12.470 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-12 19:42:12.470306 | orchestrator | 19:42:12.470 STDOUT terraform:       + attachment = (known after apply)
2025-07-12 19:42:12.470320 | orchestrator | 19:42:12.470 STDOUT terraform:       + availability_zone = "nova"
2025-07-12 19:42:12.470373 | orchestrator | 19:42:12.470 STDOUT terraform:       + id = (known after apply)
2025-07-12 19:42:12.470388 | orchestrator | 19:42:12.470 STDOUT terraform:       + image_id = (known after apply)
2025-07-12 19:42:12.470425 | orchestrator | 19:42:12.470 STDOUT terraform:       + metadata = (known after apply)
2025-07-12 19:42:12.470453 | orchestrator | 19:42:12.470 STDOUT terraform:       + name = "testbed-volume-3-node-base"
2025-07-12 19:42:12.470515 | orchestrator | 19:42:12.470 STDOUT terraform:       + region = (known after apply)
2025-07-12 19:42:12.470528 | orchestrator | 19:42:12.470 STDOUT terraform:       + size = 80
2025-07-12 19:42:12.470541 | orchestrator | 19:42:12.470 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-12 19:42:12.470551 | orchestrator | 19:42:12.470 STDOUT terraform:       + volume_type = "ssd"
2025-07-12 19:42:12.470564 | orchestrator | 19:42:12.470 STDOUT terraform:     }
2025-07-12 19:42:12.470941 | orchestrator | 19:42:12.470 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-07-12 19:42:12.470965 | orchestrator | 19:42:12.470 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-12 19:42:12.470975 | orchestrator | 19:42:12.470 STDOUT terraform:       + attachment = (known after apply)
2025-07-12 19:42:12.471002 | orchestrator | 19:42:12.470 STDOUT terraform:       + availability_zone = "nova"
2025-07-12 19:42:12.471013 | orchestrator | 19:42:12.470 STDOUT terraform:       + id = (known after apply)
2025-07-12 19:42:12.471023 | orchestrator | 19:42:12.470 STDOUT terraform:       + image_id = (known after apply)
2025-07-12 19:42:12.471037 | orchestrator | 19:42:12.470 STDOUT terraform:       + metadata = (known after apply)
2025-07-12 19:42:12.471046 | orchestrator | 19:42:12.470 STDOUT terraform:       + name = "testbed-volume-4-node-base"
2025-07-12 19:42:12.471056 | orchestrator | 19:42:12.470 STDOUT terraform:       + region = (known after apply)
2025-07-12 19:42:12.471066 | orchestrator | 19:42:12.471 STDOUT terraform:       + size = 80
2025-07-12 19:42:12.471078 | orchestrator | 19:42:12.471 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-12 19:42:12.471088 | orchestrator | 19:42:12.471 STDOUT terraform:       + volume_type = "ssd"
2025-07-12 19:42:12.471098 | orchestrator | 19:42:12.471 STDOUT terraform:     }
2025-07-12 19:42:12.471320 | orchestrator | 19:42:12.471 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-07-12 19:42:12.471370 | orchestrator | 19:42:12.471 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-07-12 19:42:12.471398 | orchestrator | 19:42:12.471 STDOUT terraform:       + attachment = (known after apply)
2025-07-12 19:42:12.471424 | orchestrator | 19:42:12.471 STDOUT terraform:       + availability_zone = "nova"
2025-07-12 19:42:12.471459 | orchestrator | 19:42:12.471 STDOUT terraform:       + id = (known after apply)
2025-07-12 19:42:12.471493 | orchestrator | 19:42:12.471 STDOUT terraform:       + image_id = (known after apply)
2025-07-12 19:42:12.471527 | orchestrator | 19:42:12.471 STDOUT terraform:       + metadata = (known after apply)
2025-07-12 19:42:12.471575 | orchestrator | 19:42:12.471 STDOUT terraform:       + name = "testbed-volume-5-node-base"
2025-07-12 19:42:12.471606 | orchestrator | 19:42:12.471 STDOUT terraform:       + region = (known after apply)
2025-07-12 19:42:12.471617 | orchestrator | 19:42:12.471 STDOUT terraform:       + size = 80
2025-07-12 19:42:12.471655 | orchestrator | 19:42:12.471 STDOUT terraform:       + volume_retype_policy = "never"
2025-07-12 19:42:12.471663 | orchestrator | 19:42:12.471 STDOUT terraform:       + volume_type = "ssd"
2025-07-12 19:42:12.471674 | orchestrator | 19:42:12.471 STDOUT terraform:     }
2025-07-12 19:42:12.471843 | orchestrator | 19:42:12.471 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-07-12 19:42:12.471895 | orchestrator | 19:42:12.471 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-07-12 19:42:12.471919 | orchestrator | 19:42:12.471 STDOUT
terraform:  + attachment = (known after apply) 2025-07-12 19:42:12.471942 | orchestrator | 19:42:12.471 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 19:42:12.472056 | orchestrator | 19:42:12.471 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.472064 | orchestrator | 19:42:12.471 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 19:42:12.472071 | orchestrator | 19:42:12.472 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-07-12 19:42:12.472079 | orchestrator | 19:42:12.472 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.472086 | orchestrator | 19:42:12.472 STDOUT terraform:  + size = 20 2025-07-12 19:42:12.472121 | orchestrator | 19:42:12.472 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 19:42:12.472131 | orchestrator | 19:42:12.472 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 19:42:12.472140 | orchestrator | 19:42:12.472 STDOUT terraform:  } 2025-07-12 19:42:12.472290 | orchestrator | 19:42:12.472 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-07-12 19:42:12.472340 | orchestrator | 19:42:12.472 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-12 19:42:12.472362 | orchestrator | 19:42:12.472 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 19:42:12.472386 | orchestrator | 19:42:12.472 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 19:42:12.472425 | orchestrator | 19:42:12.472 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.472455 | orchestrator | 19:42:12.472 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 19:42:12.472505 | orchestrator | 19:42:12.472 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-07-12 19:42:12.472534 | orchestrator | 19:42:12.472 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.472546 | orchestrator | 19:42:12.472 STDOUT terraform:  + size = 20 2025-07-12 19:42:12.472574 | 
orchestrator | 19:42:12.472 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 19:42:12.472584 | orchestrator | 19:42:12.472 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 19:42:12.472605 | orchestrator | 19:42:12.472 STDOUT terraform:  } 2025-07-12 19:42:12.472711 | orchestrator | 19:42:12.472 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-07-12 19:42:12.472758 | orchestrator | 19:42:12.472 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-12 19:42:12.472799 | orchestrator | 19:42:12.472 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 19:42:12.472832 | orchestrator | 19:42:12.472 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 19:42:12.472859 | orchestrator | 19:42:12.472 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.472888 | orchestrator | 19:42:12.472 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 19:42:12.472942 | orchestrator | 19:42:12.472 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-07-12 19:42:12.472953 | orchestrator | 19:42:12.472 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.472979 | orchestrator | 19:42:12.472 STDOUT terraform:  + size = 20 2025-07-12 19:42:12.473014 | orchestrator | 19:42:12.472 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 19:42:12.473024 | orchestrator | 19:42:12.472 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 19:42:12.473033 | orchestrator | 19:42:12.473 STDOUT terraform:  } 2025-07-12 19:42:12.473258 | orchestrator | 19:42:12.473 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-07-12 19:42:12.473276 | orchestrator | 19:42:12.473 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-12 19:42:12.473312 | orchestrator | 19:42:12.473 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 19:42:12.473332 | orchestrator | 
19:42:12.473 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 19:42:12.473377 | orchestrator | 19:42:12.473 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.473407 | orchestrator | 19:42:12.473 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 19:42:12.473439 | orchestrator | 19:42:12.473 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-07-12 19:42:12.473488 | orchestrator | 19:42:12.473 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.473497 | orchestrator | 19:42:12.473 STDOUT terraform:  + size = 20 2025-07-12 19:42:12.473506 | orchestrator | 19:42:12.473 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 19:42:12.473531 | orchestrator | 19:42:12.473 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 19:42:12.473541 | orchestrator | 19:42:12.473 STDOUT terraform:  } 2025-07-12 19:42:12.473644 | orchestrator | 19:42:12.473 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-07-12 19:42:12.473694 | orchestrator | 19:42:12.473 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-12 19:42:12.473718 | orchestrator | 19:42:12.473 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 19:42:12.473743 | orchestrator | 19:42:12.473 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 19:42:12.473822 | orchestrator | 19:42:12.473 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.473872 | orchestrator | 19:42:12.473 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 19:42:12.473907 | orchestrator | 19:42:12.473 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-07-12 19:42:12.473934 | orchestrator | 19:42:12.473 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.473967 | orchestrator | 19:42:12.473 STDOUT terraform:  + size = 20 2025-07-12 19:42:12.473976 | orchestrator | 19:42:12.473 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 
19:42:12.473999 | orchestrator | 19:42:12.473 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 19:42:12.474008 | orchestrator | 19:42:12.473 STDOUT terraform:  } 2025-07-12 19:42:12.474135 | orchestrator | 19:42:12.474 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-07-12 19:42:12.474179 | orchestrator | 19:42:12.474 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-12 19:42:12.474294 | orchestrator | 19:42:12.474 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 19:42:12.474306 | orchestrator | 19:42:12.474 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 19:42:12.474313 | orchestrator | 19:42:12.474 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.474319 | orchestrator | 19:42:12.474 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 19:42:12.474329 | orchestrator | 19:42:12.474 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-07-12 19:42:12.474366 | orchestrator | 19:42:12.474 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.474376 | orchestrator | 19:42:12.474 STDOUT terraform:  + size = 20 2025-07-12 19:42:12.474404 | orchestrator | 19:42:12.474 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 19:42:12.474427 | orchestrator | 19:42:12.474 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 19:42:12.474435 | orchestrator | 19:42:12.474 STDOUT terraform:  } 2025-07-12 19:42:12.474548 | orchestrator | 19:42:12.474 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-07-12 19:42:12.474595 | orchestrator | 19:42:12.474 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-12 19:42:12.474625 | orchestrator | 19:42:12.474 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 19:42:12.474660 | orchestrator | 19:42:12.474 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 19:42:12.474688 | 
orchestrator | 19:42:12.474 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.474719 | orchestrator | 19:42:12.474 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 19:42:12.474846 | orchestrator | 19:42:12.474 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-07-12 19:42:12.474863 | orchestrator | 19:42:12.474 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.474871 | orchestrator | 19:42:12.474 STDOUT terraform:  + size = 20 2025-07-12 19:42:12.474880 | orchestrator | 19:42:12.474 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 19:42:12.474887 | orchestrator | 19:42:12.474 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 19:42:12.474893 | orchestrator | 19:42:12.474 STDOUT terraform:  } 2025-07-12 19:42:12.475001 | orchestrator | 19:42:12.474 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-07-12 19:42:12.475048 | orchestrator | 19:42:12.474 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-12 19:42:12.475068 | orchestrator | 19:42:12.475 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 19:42:12.475095 | orchestrator | 19:42:12.475 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 19:42:12.475134 | orchestrator | 19:42:12.475 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.475164 | orchestrator | 19:42:12.475 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 19:42:12.475216 | orchestrator | 19:42:12.475 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-07-12 19:42:12.475227 | orchestrator | 19:42:12.475 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.475258 | orchestrator | 19:42:12.475 STDOUT terraform:  + size = 20 2025-07-12 19:42:12.475268 | orchestrator | 19:42:12.475 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 19:42:12.475288 | orchestrator | 19:42:12.475 STDOUT terraform:  + volume_type = "ssd" 
2025-07-12 19:42:12.475297 | orchestrator | 19:42:12.475 STDOUT terraform:  } 2025-07-12 19:42:12.475497 | orchestrator | 19:42:12.475 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-07-12 19:42:12.475527 | orchestrator | 19:42:12.475 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-07-12 19:42:12.475564 | orchestrator | 19:42:12.475 STDOUT terraform:  + attachment = (known after apply) 2025-07-12 19:42:12.475587 | orchestrator | 19:42:12.475 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 19:42:12.475633 | orchestrator | 19:42:12.475 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.475657 | orchestrator | 19:42:12.475 STDOUT terraform:  + metadata = (known after apply) 2025-07-12 19:42:12.485914 | orchestrator | 19:42:12.475 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-07-12 19:42:12.485963 | orchestrator | 19:42:12.485 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.485972 | orchestrator | 19:42:12.485 STDOUT terraform:  + size = 20 2025-07-12 19:42:12.486003 | orchestrator | 19:42:12.485 STDOUT terraform:  + volume_retype_policy = "never" 2025-07-12 19:42:12.486041 | orchestrator | 19:42:12.485 STDOUT terraform:  + volume_type = "ssd" 2025-07-12 19:42:12.486064 | orchestrator | 19:42:12.486 STDOUT terraform:  } 2025-07-12 19:42:12.486195 | orchestrator | 19:42:12.486 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-07-12 19:42:12.486233 | orchestrator | 19:42:12.486 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-07-12 19:42:12.486270 | orchestrator | 19:42:12.486 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-12 19:42:12.486305 | orchestrator | 19:42:12.486 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-12 19:42:12.486342 | orchestrator | 19:42:12.486 STDOUT terraform:  + all_metadata = (known after apply) 
2025-07-12 19:42:12.486375 | orchestrator | 19:42:12.486 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:42:12.486461 | orchestrator | 19:42:12.486 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 19:42:12.486481 | orchestrator | 19:42:12.486 STDOUT terraform:  + config_drive = true 2025-07-12 19:42:12.486487 | orchestrator | 19:42:12.486 STDOUT terraform:  + created = (known after apply) 2025-07-12 19:42:12.486494 | orchestrator | 19:42:12.486 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-12 19:42:12.486500 | orchestrator | 19:42:12.486 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-07-12 19:42:12.486524 | orchestrator | 19:42:12.486 STDOUT terraform:  + force_delete = false 2025-07-12 19:42:12.486557 | orchestrator | 19:42:12.486 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-12 19:42:12.486593 | orchestrator | 19:42:12.486 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.486629 | orchestrator | 19:42:12.486 STDOUT terraform:  + image_id = (known after apply) 2025-07-12 19:42:12.486662 | orchestrator | 19:42:12.486 STDOUT terraform:  + image_name = (known after apply) 2025-07-12 19:42:12.486686 | orchestrator | 19:42:12.486 STDOUT terraform:  + key_pair = "testbed" 2025-07-12 19:42:12.486716 | orchestrator | 19:42:12.486 STDOUT terraform:  + name = "testbed-manager" 2025-07-12 19:42:12.486738 | orchestrator | 19:42:12.486 STDOUT terraform:  + power_state = "active" 2025-07-12 19:42:12.486779 | orchestrator | 19:42:12.486 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.486815 | orchestrator | 19:42:12.486 STDOUT terraform:  + security_groups = (known after apply) 2025-07-12 19:42:12.486838 | orchestrator | 19:42:12.486 STDOUT terraform:  + stop_before_destroy = false 2025-07-12 19:42:12.486871 | orchestrator | 19:42:12.486 STDOUT terraform:  + updated = (known after apply) 2025-07-12 19:42:12.486899 | orchestrator | 19:42:12.486 STDOUT terraform:  + 
user_data = (sensitive value) 2025-07-12 19:42:12.486908 | orchestrator | 19:42:12.486 STDOUT terraform:  + block_device { 2025-07-12 19:42:12.486937 | orchestrator | 19:42:12.486 STDOUT terraform:  + boot_index = 0 2025-07-12 19:42:12.486966 | orchestrator | 19:42:12.486 STDOUT terraform:  + delete_on_termination = false 2025-07-12 19:42:12.487003 | orchestrator | 19:42:12.486 STDOUT terraform:  + destination_type = "volume" 2025-07-12 19:42:12.487024 | orchestrator | 19:42:12.486 STDOUT terraform:  + multiattach = false 2025-07-12 19:42:12.487052 | orchestrator | 19:42:12.487 STDOUT terraform:  + source_type = "volume" 2025-07-12 19:42:12.487090 | orchestrator | 19:42:12.487 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 19:42:12.487098 | orchestrator | 19:42:12.487 STDOUT terraform:  } 2025-07-12 19:42:12.487105 | orchestrator | 19:42:12.487 STDOUT terraform:  + network { 2025-07-12 19:42:12.487130 | orchestrator | 19:42:12.487 STDOUT terraform:  + access_network = false 2025-07-12 19:42:12.487161 | orchestrator | 19:42:12.487 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-12 19:42:12.487195 | orchestrator | 19:42:12.487 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-12 19:42:12.487223 | orchestrator | 19:42:12.487 STDOUT terraform:  + mac = (known after apply) 2025-07-12 19:42:12.487253 | orchestrator | 19:42:12.487 STDOUT terraform:  + name = (known after apply) 2025-07-12 19:42:12.487284 | orchestrator | 19:42:12.487 STDOUT terraform:  + port = (known after apply) 2025-07-12 19:42:12.487314 | orchestrator | 19:42:12.487 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 19:42:12.487322 | orchestrator | 19:42:12.487 STDOUT terraform:  } 2025-07-12 19:42:12.487330 | orchestrator | 19:42:12.487 STDOUT terraform:  } 2025-07-12 19:42:12.487390 | orchestrator | 19:42:12.487 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-07-12 19:42:12.487416 | orchestrator | 19:42:12.487 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-12 19:42:12.487448 | orchestrator | 19:42:12.487 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-12 19:42:12.487482 | orchestrator | 19:42:12.487 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-12 19:42:12.487564 | orchestrator | 19:42:12.487 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-12 19:42:12.487572 | orchestrator | 19:42:12.487 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:42:12.487578 | orchestrator | 19:42:12.487 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 19:42:12.487585 | orchestrator | 19:42:12.487 STDOUT terraform:  + config_drive = true 2025-07-12 19:42:12.487612 | orchestrator | 19:42:12.487 STDOUT terraform:  + created = (known after apply) 2025-07-12 19:42:12.487644 | orchestrator | 19:42:12.487 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-12 19:42:12.487674 | orchestrator | 19:42:12.487 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-12 19:42:12.487697 | orchestrator | 19:42:12.487 STDOUT terraform:  + force_delete = false 2025-07-12 19:42:12.487728 | orchestrator | 19:42:12.487 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-12 19:42:12.487774 | orchestrator | 19:42:12.487 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.487807 | orchestrator | 19:42:12.487 STDOUT terraform:  + image_id = (known after apply) 2025-07-12 19:42:12.487841 | orchestrator | 19:42:12.487 STDOUT terraform:  + image_name = (known after apply) 2025-07-12 19:42:12.487900 | orchestrator | 19:42:12.487 STDOUT terraform:  + key_pair = "testbed" 2025-07-12 19:42:12.487908 | orchestrator | 19:42:12.487 STDOUT terraform:  + name = "testbed-node-0" 2025-07-12 19:42:12.487915 | orchestrator | 19:42:12.487 STDOUT terraform:  + power_state = "active" 2025-07-12 19:42:12.487942 | orchestrator | 19:42:12.487 STDOUT terraform:  + region = (known after 
apply) 2025-07-12 19:42:12.487987 | orchestrator | 19:42:12.487 STDOUT terraform:  + security_groups = (known after apply) 2025-07-12 19:42:12.488010 | orchestrator | 19:42:12.487 STDOUT terraform:  + stop_before_destroy = false 2025-07-12 19:42:12.488043 | orchestrator | 19:42:12.488 STDOUT terraform:  + updated = (known after apply) 2025-07-12 19:42:12.488091 | orchestrator | 19:42:12.488 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-12 19:42:12.488101 | orchestrator | 19:42:12.488 STDOUT terraform:  + block_device { 2025-07-12 19:42:12.488127 | orchestrator | 19:42:12.488 STDOUT terraform:  + boot_index = 0 2025-07-12 19:42:12.488155 | orchestrator | 19:42:12.488 STDOUT terraform:  + delete_on_termination = false 2025-07-12 19:42:12.488184 | orchestrator | 19:42:12.488 STDOUT terraform:  + destination_type = "volume" 2025-07-12 19:42:12.488212 | orchestrator | 19:42:12.488 STDOUT terraform:  + multiattach = false 2025-07-12 19:42:12.488241 | orchestrator | 19:42:12.488 STDOUT terraform:  + source_type = "volume" 2025-07-12 19:42:12.488278 | orchestrator | 19:42:12.488 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 19:42:12.488286 | orchestrator | 19:42:12.488 STDOUT terraform:  } 2025-07-12 19:42:12.488293 | orchestrator | 19:42:12.488 STDOUT terraform:  + network { 2025-07-12 19:42:12.488318 | orchestrator | 19:42:12.488 STDOUT terraform:  + access_network = false 2025-07-12 19:42:12.488360 | orchestrator | 19:42:12.488 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-07-12 19:42:12.488382 | orchestrator | 19:42:12.488 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-12 19:42:12.488411 | orchestrator | 19:42:12.488 STDOUT terraform:  + mac = (known after apply) 2025-07-12 19:42:12.488442 | orchestrator | 19:42:12.488 STDOUT terraform:  + name = (known after apply) 2025-07-12 19:42:12.488472 | orchestrator | 19:42:12.488 STDOUT terraform:  + port = (known after apply) 2025-07-12 
19:42:12.488502 | orchestrator | 19:42:12.488 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 19:42:12.488510 | orchestrator | 19:42:12.488 STDOUT terraform:  } 2025-07-12 19:42:12.488517 | orchestrator | 19:42:12.488 STDOUT terraform:  } 2025-07-12 19:42:12.488563 | orchestrator | 19:42:12.488 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-07-12 19:42:12.488662 | orchestrator | 19:42:12.488 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-12 19:42:12.488672 | orchestrator | 19:42:12.488 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-12 19:42:12.488677 | orchestrator | 19:42:12.488 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-12 19:42:12.488698 | orchestrator | 19:42:12.488 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-12 19:42:12.488730 | orchestrator | 19:42:12.488 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:42:12.488754 | orchestrator | 19:42:12.488 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 19:42:12.488818 | orchestrator | 19:42:12.488 STDOUT terraform:  + config_drive = true 2025-07-12 19:42:12.488841 | orchestrator | 19:42:12.488 STDOUT terraform:  + created = (known after apply) 2025-07-12 19:42:12.488876 | orchestrator | 19:42:12.488 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-12 19:42:12.488905 | orchestrator | 19:42:12.488 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-12 19:42:12.488929 | orchestrator | 19:42:12.488 STDOUT terraform:  + force_delete = false 2025-07-12 19:42:12.488961 | orchestrator | 19:42:12.488 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-07-12 19:42:12.488998 | orchestrator | 19:42:12.488 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.489032 | orchestrator | 19:42:12.488 STDOUT terraform:  + image_id = (known after apply) 2025-07-12 19:42:12.489067 | orchestrator | 19:42:12.489 STDOUT 
terraform:  + image_name = (known after apply) 2025-07-12 19:42:12.489091 | orchestrator | 19:42:12.489 STDOUT terraform:  + key_pair = "testbed" 2025-07-12 19:42:12.489121 | orchestrator | 19:42:12.489 STDOUT terraform:  + name = "testbed-node-1" 2025-07-12 19:42:12.489145 | orchestrator | 19:42:12.489 STDOUT terraform:  + power_state = "active" 2025-07-12 19:42:12.489179 | orchestrator | 19:42:12.489 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.489213 | orchestrator | 19:42:12.489 STDOUT terraform:  + security_groups = (known after apply) 2025-07-12 19:42:12.489234 | orchestrator | 19:42:12.489 STDOUT terraform:  + stop_before_destroy = false 2025-07-12 19:42:12.489267 | orchestrator | 19:42:12.489 STDOUT terraform:  + updated = (known after apply) 2025-07-12 19:42:12.489315 | orchestrator | 19:42:12.489 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-07-12 19:42:12.489323 | orchestrator | 19:42:12.489 STDOUT terraform:  + block_device { 2025-07-12 19:42:12.489352 | orchestrator | 19:42:12.489 STDOUT terraform:  + boot_index = 0 2025-07-12 19:42:12.489380 | orchestrator | 19:42:12.489 STDOUT terraform:  + delete_on_termination = false 2025-07-12 19:42:12.489411 | orchestrator | 19:42:12.489 STDOUT terraform:  + destination_type = "volume" 2025-07-12 19:42:12.489438 | orchestrator | 19:42:12.489 STDOUT terraform:  + multiattach = false 2025-07-12 19:42:12.489465 | orchestrator | 19:42:12.489 STDOUT terraform:  + source_type = "volume" 2025-07-12 19:42:12.489504 | orchestrator | 19:42:12.489 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 19:42:12.489512 | orchestrator | 19:42:12.489 STDOUT terraform:  } 2025-07-12 19:42:12.489519 | orchestrator | 19:42:12.489 STDOUT terraform:  + network { 2025-07-12 19:42:12.489543 | orchestrator | 19:42:12.489 STDOUT terraform:  + access_network = false 2025-07-12 19:42:12.489572 | orchestrator | 19:42:12.489 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-07-12 19:42:12.489601 | orchestrator | 19:42:12.489 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-07-12 19:42:12.489631 | orchestrator | 19:42:12.489 STDOUT terraform:  + mac = (known after apply) 2025-07-12 19:42:12.489661 | orchestrator | 19:42:12.489 STDOUT terraform:  + name = (known after apply) 2025-07-12 19:42:12.489700 | orchestrator | 19:42:12.489 STDOUT terraform:  + port = (known after apply) 2025-07-12 19:42:12.489794 | orchestrator | 19:42:12.489 STDOUT terraform:  + uuid = (known after apply) 2025-07-12 19:42:12.489801 | orchestrator | 19:42:12.489 STDOUT terraform:  } 2025-07-12 19:42:12.489807 | orchestrator | 19:42:12.489 STDOUT terraform:  } 2025-07-12 19:42:12.489815 | orchestrator | 19:42:12.489 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-07-12 19:42:12.489823 | orchestrator | 19:42:12.489 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-07-12 19:42:12.489872 | orchestrator | 19:42:12.489 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-07-12 19:42:12.489905 | orchestrator | 19:42:12.489 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-07-12 19:42:12.489938 | orchestrator | 19:42:12.489 STDOUT terraform:  + all_metadata = (known after apply) 2025-07-12 19:42:12.489975 | orchestrator | 19:42:12.489 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:42:12.489984 | orchestrator | 19:42:12.489 STDOUT terraform:  + availability_zone = "nova" 2025-07-12 19:42:12.490012 | orchestrator | 19:42:12.489 STDOUT terraform:  + config_drive = true 2025-07-12 19:42:12.490068 | orchestrator | 19:42:12.490 STDOUT terraform:  + created = (known after apply) 2025-07-12 19:42:12.490102 | orchestrator | 19:42:12.490 STDOUT terraform:  + flavor_id = (known after apply) 2025-07-12 19:42:12.490131 | orchestrator | 19:42:12.490 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-07-12 19:42:12.490155 | orchestrator | 19:42:12.490 
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-12 19:42:12.507910 | orchestrator | 19:42:12.507 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-12 19:42:12.507955 | orchestrator | 19:42:12.507 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:42:12.507999 | orchestrator | 19:42:12.507 STDOUT terraform:  + device_id = (known after apply) 2025-07-12 19:42:12.508044 | orchestrator | 19:42:12.508 STDOUT terraform:  + device_owner = (known after apply) 2025-07-12 19:42:12.508091 | orchestrator | 19:42:12.508 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-12 19:42:12.508134 | orchestrator | 19:42:12.508 STDOUT terraform:  + dns_name = (known after apply) 2025-07-12 19:42:12.508180 | orchestrator | 19:42:12.508 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.508224 | orchestrator | 19:42:12.508 STDOUT terraform:  + mac_address = (known after apply) 2025-07-12 19:42:12.508267 | orchestrator | 19:42:12.508 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 19:42:12.508310 | orchestrator | 19:42:12.508 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-12 19:42:12.508354 | orchestrator | 19:42:12.508 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-12 19:42:12.508398 | orchestrator | 19:42:12.508 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.508441 | orchestrator | 19:42:12.508 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-12 19:42:12.508517 | orchestrator | 19:42:12.508 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:42:12.508552 | orchestrator | 19:42:12.508 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:42:12.508589 | orchestrator | 19:42:12.508 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-12 19:42:12.508613 | orchestrator | 19:42:12.508 STDOUT terraform:  } 2025-07-12 19:42:12.508641 | orchestrator | 19:42:12.508 STDOUT terraform:  
+ allowed_address_pairs { 2025-07-12 19:42:12.508737 | orchestrator | 19:42:12.508 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-12 19:42:12.508796 | orchestrator | 19:42:12.508 STDOUT terraform:  } 2025-07-12 19:42:12.508829 | orchestrator | 19:42:12.508 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:42:12.508870 | orchestrator | 19:42:12.508 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-12 19:42:12.508903 | orchestrator | 19:42:12.508 STDOUT terraform:  } 2025-07-12 19:42:12.508953 | orchestrator | 19:42:12.508 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:42:12.508991 | orchestrator | 19:42:12.508 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-12 19:42:12.509014 | orchestrator | 19:42:12.508 STDOUT terraform:  } 2025-07-12 19:42:12.509047 | orchestrator | 19:42:12.509 STDOUT terraform:  + binding (known after apply) 2025-07-12 19:42:12.509068 | orchestrator | 19:42:12.509 STDOUT terraform:  + fixed_ip { 2025-07-12 19:42:12.509098 | orchestrator | 19:42:12.509 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-07-12 19:42:12.509135 | orchestrator | 19:42:12.509 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 19:42:12.509155 | orchestrator | 19:42:12.509 STDOUT terraform:  } 2025-07-12 19:42:12.509174 | orchestrator | 19:42:12.509 STDOUT terraform:  } 2025-07-12 19:42:12.509228 | orchestrator | 19:42:12.509 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-07-12 19:42:12.509278 | orchestrator | 19:42:12.509 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-12 19:42:12.509321 | orchestrator | 19:42:12.509 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-12 19:42:12.509364 | orchestrator | 19:42:12.509 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-12 19:42:12.509406 | orchestrator | 19:42:12.509 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-07-12 19:42:12.509447 | orchestrator | 19:42:12.509 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:42:12.509490 | orchestrator | 19:42:12.509 STDOUT terraform:  + device_id = (known after apply) 2025-07-12 19:42:12.509532 | orchestrator | 19:42:12.509 STDOUT terraform:  + device_owner = (known after apply) 2025-07-12 19:42:12.509573 | orchestrator | 19:42:12.509 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-12 19:42:12.509614 | orchestrator | 19:42:12.509 STDOUT terraform:  + dns_name = (known after apply) 2025-07-12 19:42:12.509660 | orchestrator | 19:42:12.509 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.509702 | orchestrator | 19:42:12.509 STDOUT terraform:  + mac_address = (known after apply) 2025-07-12 19:42:12.509744 | orchestrator | 19:42:12.509 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 19:42:12.509853 | orchestrator | 19:42:12.509 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-12 19:42:12.509905 | orchestrator | 19:42:12.509 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-12 19:42:12.509950 | orchestrator | 19:42:12.509 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.509992 | orchestrator | 19:42:12.509 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-12 19:42:12.510049 | orchestrator | 19:42:12.510 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:42:12.510080 | orchestrator | 19:42:12.510 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:42:12.510116 | orchestrator | 19:42:12.510 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-12 19:42:12.510145 | orchestrator | 19:42:12.510 STDOUT terraform:  } 2025-07-12 19:42:12.510171 | orchestrator | 19:42:12.510 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:42:12.510206 | orchestrator | 19:42:12.510 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-12 19:42:12.510227 | 
orchestrator | 19:42:12.510 STDOUT terraform:  } 2025-07-12 19:42:12.510255 | orchestrator | 19:42:12.510 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:42:12.510289 | orchestrator | 19:42:12.510 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-12 19:42:12.510309 | orchestrator | 19:42:12.510 STDOUT terraform:  } 2025-07-12 19:42:12.510336 | orchestrator | 19:42:12.510 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:42:12.510370 | orchestrator | 19:42:12.510 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-12 19:42:12.510391 | orchestrator | 19:42:12.510 STDOUT terraform:  } 2025-07-12 19:42:12.510421 | orchestrator | 19:42:12.510 STDOUT terraform:  + binding (known after apply) 2025-07-12 19:42:12.510442 | orchestrator | 19:42:12.510 STDOUT terraform:  + fixed_ip { 2025-07-12 19:42:12.510473 | orchestrator | 19:42:12.510 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-07-12 19:42:12.510508 | orchestrator | 19:42:12.510 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 19:42:12.510528 | orchestrator | 19:42:12.510 STDOUT terraform:  } 2025-07-12 19:42:12.510549 | orchestrator | 19:42:12.510 STDOUT terraform:  } 2025-07-12 19:42:12.510601 | orchestrator | 19:42:12.510 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-07-12 19:42:12.510651 | orchestrator | 19:42:12.510 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-12 19:42:12.510692 | orchestrator | 19:42:12.510 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-12 19:42:12.510834 | orchestrator | 19:42:12.510 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-12 19:42:12.510943 | orchestrator | 19:42:12.510 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-12 19:42:12.510991 | orchestrator | 19:42:12.510 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:42:12.511034 | orchestrator | 
19:42:12.511 STDOUT terraform:  + device_id = (known after apply) 2025-07-12 19:42:12.511081 | orchestrator | 19:42:12.511 STDOUT terraform:  + device_owner = (known after apply) 2025-07-12 19:42:12.511124 | orchestrator | 19:42:12.511 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-12 19:42:12.511169 | orchestrator | 19:42:12.511 STDOUT terraform:  + dns_name = (known after apply) 2025-07-12 19:42:12.511213 | orchestrator | 19:42:12.511 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.511259 | orchestrator | 19:42:12.511 STDOUT terraform:  + mac_address = (known after apply) 2025-07-12 19:42:12.511302 | orchestrator | 19:42:12.511 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 19:42:12.511344 | orchestrator | 19:42:12.511 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-12 19:42:12.511395 | orchestrator | 19:42:12.511 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-12 19:42:12.511438 | orchestrator | 19:42:12.511 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.511489 | orchestrator | 19:42:12.511 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-12 19:42:12.511532 | orchestrator | 19:42:12.511 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:42:12.511559 | orchestrator | 19:42:12.511 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:42:12.511594 | orchestrator | 19:42:12.511 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-12 19:42:12.511617 | orchestrator | 19:42:12.511 STDOUT terraform:  } 2025-07-12 19:42:12.511644 | orchestrator | 19:42:12.511 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:42:12.511680 | orchestrator | 19:42:12.511 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-12 19:42:12.511701 | orchestrator | 19:42:12.511 STDOUT terraform:  } 2025-07-12 19:42:12.511730 | orchestrator | 19:42:12.511 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 
19:42:12.511803 | orchestrator | 19:42:12.511 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-12 19:42:12.511828 | orchestrator | 19:42:12.511 STDOUT terraform:  } 2025-07-12 19:42:12.511856 | orchestrator | 19:42:12.511 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:42:12.511892 | orchestrator | 19:42:12.511 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-12 19:42:12.511915 | orchestrator | 19:42:12.511 STDOUT terraform:  } 2025-07-12 19:42:12.511947 | orchestrator | 19:42:12.511 STDOUT terraform:  + binding (known after apply) 2025-07-12 19:42:12.512037 | orchestrator | 19:42:12.512 STDOUT terraform:  + fixed_ip { 2025-07-12 19:42:12.512072 | orchestrator | 19:42:12.512 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-07-12 19:42:12.512111 | orchestrator | 19:42:12.512 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 19:42:12.512132 | orchestrator | 19:42:12.512 STDOUT terraform:  } 2025-07-12 19:42:12.512154 | orchestrator | 19:42:12.512 STDOUT terraform:  } 2025-07-12 19:42:12.512207 | orchestrator | 19:42:12.512 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-07-12 19:42:12.512261 | orchestrator | 19:42:12.512 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-07-12 19:42:12.512305 | orchestrator | 19:42:12.512 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-12 19:42:12.512348 | orchestrator | 19:42:12.512 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-07-12 19:42:12.512390 | orchestrator | 19:42:12.512 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-07-12 19:42:12.512435 | orchestrator | 19:42:12.512 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:42:12.512478 | orchestrator | 19:42:12.512 STDOUT terraform:  + device_id = (known after apply) 2025-07-12 19:42:12.512520 | orchestrator | 19:42:12.512 STDOUT terraform:  + device_owner = (known after 
apply) 2025-07-12 19:42:12.512564 | orchestrator | 19:42:12.512 STDOUT terraform:  + dns_assignment = (known after apply) 2025-07-12 19:42:12.512620 | orchestrator | 19:42:12.512 STDOUT terraform:  + dns_name = (known after apply) 2025-07-12 19:42:12.512665 | orchestrator | 19:42:12.512 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.512707 | orchestrator | 19:42:12.512 STDOUT terraform:  + mac_address = (known after apply) 2025-07-12 19:42:12.512750 | orchestrator | 19:42:12.512 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 19:42:12.512806 | orchestrator | 19:42:12.512 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-07-12 19:42:12.512851 | orchestrator | 19:42:12.512 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-07-12 19:42:12.512903 | orchestrator | 19:42:12.512 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.512947 | orchestrator | 19:42:12.512 STDOUT terraform:  + security_group_ids = (known after apply) 2025-07-12 19:42:12.512989 | orchestrator | 19:42:12.512 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:42:12.513016 | orchestrator | 19:42:12.512 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:42:12.513052 | orchestrator | 19:42:12.513 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-07-12 19:42:12.513124 | orchestrator | 19:42:12.513 STDOUT terraform:  } 2025-07-12 19:42:12.513153 | orchestrator | 19:42:12.513 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:42:12.513191 | orchestrator | 19:42:12.513 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-07-12 19:42:12.513214 | orchestrator | 19:42:12.513 STDOUT terraform:  } 2025-07-12 19:42:12.513243 | orchestrator | 19:42:12.513 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:42:12.513279 | orchestrator | 19:42:12.513 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-07-12 19:42:12.513303 | orchestrator | 19:42:12.513 STDOUT terraform:  } 
2025-07-12 19:42:12.513332 | orchestrator | 19:42:12.513 STDOUT terraform:  + allowed_address_pairs { 2025-07-12 19:42:12.513382 | orchestrator | 19:42:12.513 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-07-12 19:42:12.513415 | orchestrator | 19:42:12.513 STDOUT terraform:  } 2025-07-12 19:42:12.513449 | orchestrator | 19:42:12.513 STDOUT terraform:  + binding (known after apply) 2025-07-12 19:42:12.513471 | orchestrator | 19:42:12.513 STDOUT terraform:  + fixed_ip { 2025-07-12 19:42:12.513502 | orchestrator | 19:42:12.513 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-07-12 19:42:12.513538 | orchestrator | 19:42:12.513 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 19:42:12.513558 | orchestrator | 19:42:12.513 STDOUT terraform:  } 2025-07-12 19:42:12.513578 | orchestrator | 19:42:12.513 STDOUT terraform:  } 2025-07-12 19:42:12.513633 | orchestrator | 19:42:12.513 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-07-12 19:42:12.513687 | orchestrator | 19:42:12.513 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-07-12 19:42:12.513714 | orchestrator | 19:42:12.513 STDOUT terraform:  + force_destroy = false 2025-07-12 19:42:12.513750 | orchestrator | 19:42:12.513 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.513805 | orchestrator | 19:42:12.513 STDOUT terraform:  + port_id = (known after apply) 2025-07-12 19:42:12.513841 | orchestrator | 19:42:12.513 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.513876 | orchestrator | 19:42:12.513 STDOUT terraform:  + router_id = (known after apply) 2025-07-12 19:42:12.513911 | orchestrator | 19:42:12.513 STDOUT terraform:  + subnet_id = (known after apply) 2025-07-12 19:42:12.513931 | orchestrator | 19:42:12.513 STDOUT terraform:  } 2025-07-12 19:42:12.513973 | orchestrator | 19:42:12.513 STDOUT terraform:  # openstack_networking_router_v2.router will be 
created 2025-07-12 19:42:12.514032 | orchestrator | 19:42:12.513 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-07-12 19:42:12.514079 | orchestrator | 19:42:12.514 STDOUT terraform:  + admin_state_up = (known after apply) 2025-07-12 19:42:12.514124 | orchestrator | 19:42:12.514 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:42:12.514211 | orchestrator | 19:42:12.514 STDOUT terraform:  + availability_zone_hints = [ 2025-07-12 19:42:12.514238 | orchestrator | 19:42:12.514 STDOUT terraform:  + "nova", 2025-07-12 19:42:12.514263 | orchestrator | 19:42:12.514 STDOUT terraform:  ] 2025-07-12 19:42:12.514306 | orchestrator | 19:42:12.514 STDOUT terraform:  + distributed = (known after apply) 2025-07-12 19:42:12.514350 | orchestrator | 19:42:12.514 STDOUT terraform:  + enable_snat = (known after apply) 2025-07-12 19:42:12.514406 | orchestrator | 19:42:12.514 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-07-12 19:42:12.514454 | orchestrator | 19:42:12.514 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-07-12 19:42:12.514499 | orchestrator | 19:42:12.514 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.514538 | orchestrator | 19:42:12.514 STDOUT terraform:  + name = "testbed" 2025-07-12 19:42:12.514583 | orchestrator | 19:42:12.514 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.514626 | orchestrator | 19:42:12.514 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:42:12.514662 | orchestrator | 19:42:12.514 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-07-12 19:42:12.514683 | orchestrator | 19:42:12.514 STDOUT terraform:  } 2025-07-12 19:42:12.514755 | orchestrator | 19:42:12.514 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-07-12 19:42:12.514852 | orchestrator | 19:42:12.514 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-07-12 19:42:12.514884 | orchestrator | 19:42:12.514 STDOUT terraform:  + description = "ssh" 2025-07-12 19:42:12.514920 | orchestrator | 19:42:12.514 STDOUT terraform:  + direction = "ingress" 2025-07-12 19:42:12.514952 | orchestrator | 19:42:12.514 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 19:42:12.514995 | orchestrator | 19:42:12.514 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.515028 | orchestrator | 19:42:12.515 STDOUT terraform:  + port_range_max = 22 2025-07-12 19:42:12.515058 | orchestrator | 19:42:12.515 STDOUT terraform:  + port_range_min = 22 2025-07-12 19:42:12.515098 | orchestrator | 19:42:12.515 STDOUT terraform:  + protocol = "tcp" 2025-07-12 19:42:12.515142 | orchestrator | 19:42:12.515 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.515185 | orchestrator | 19:42:12.515 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 19:42:12.515226 | orchestrator | 19:42:12.515 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 19:42:12.515317 | orchestrator | 19:42:12.515 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-12 19:42:12.515362 | orchestrator | 19:42:12.515 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 19:42:12.515408 | orchestrator | 19:42:12.515 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:42:12.515430 | orchestrator | 19:42:12.515 STDOUT terraform:  } 2025-07-12 19:42:12.515490 | orchestrator | 19:42:12.515 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-07-12 19:42:12.515550 | orchestrator | 19:42:12.515 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-07-12 19:42:12.515586 | orchestrator | 19:42:12.515 STDOUT terraform:  + description = "wireguard" 2025-07-12 19:42:12.515622 | orchestrator 
| 19:42:12.515 STDOUT terraform:  + direction = "ingress" 2025-07-12 19:42:12.515655 | orchestrator | 19:42:12.515 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 19:42:12.515700 | orchestrator | 19:42:12.515 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.515731 | orchestrator | 19:42:12.515 STDOUT terraform:  + port_range_max = 51820 2025-07-12 19:42:12.515779 | orchestrator | 19:42:12.515 STDOUT terraform:  + port_range_min = 51820 2025-07-12 19:42:12.515814 | orchestrator | 19:42:12.515 STDOUT terraform:  + protocol = "udp" 2025-07-12 19:42:12.515858 | orchestrator | 19:42:12.515 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.515900 | orchestrator | 19:42:12.515 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 19:42:12.515951 | orchestrator | 19:42:12.515 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 19:42:12.515990 | orchestrator | 19:42:12.515 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-12 19:42:12.516037 | orchestrator | 19:42:12.515 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 19:42:12.516084 | orchestrator | 19:42:12.516 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:42:12.516107 | orchestrator | 19:42:12.516 STDOUT terraform:  } 2025-07-12 19:42:12.516168 | orchestrator | 19:42:12.516 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-07-12 19:42:12.516228 | orchestrator | 19:42:12.516 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-07-12 19:42:12.516265 | orchestrator | 19:42:12.516 STDOUT terraform:  + direction = "ingress" 2025-07-12 19:42:12.516297 | orchestrator | 19:42:12.516 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 19:42:12.516395 | orchestrator | 19:42:12.516 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.516433 | orchestrator | 
19:42:12.516 STDOUT terraform:  + protocol = "tcp" 2025-07-12 19:42:12.516477 | orchestrator | 19:42:12.516 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.516520 | orchestrator | 19:42:12.516 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 19:42:12.516564 | orchestrator | 19:42:12.516 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 19:42:12.516606 | orchestrator | 19:42:12.516 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-07-12 19:42:12.516651 | orchestrator | 19:42:12.516 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 19:42:12.516695 | orchestrator | 19:42:12.516 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:42:12.516717 | orchestrator | 19:42:12.516 STDOUT terraform:  } 2025-07-12 19:42:12.516789 | orchestrator | 19:42:12.516 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-07-12 19:42:12.516849 | orchestrator | 19:42:12.516 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-07-12 19:42:12.516885 | orchestrator | 19:42:12.516 STDOUT terraform:  + direction = "ingress" 2025-07-12 19:42:12.516920 | orchestrator | 19:42:12.516 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 19:42:12.516970 | orchestrator | 19:42:12.516 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.517018 | orchestrator | 19:42:12.516 STDOUT terraform:  + protocol = "udp" 2025-07-12 19:42:12.517063 | orchestrator | 19:42:12.517 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.517106 | orchestrator | 19:42:12.517 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 19:42:12.517128 | orchestrator | 19:42:12.517 STDOUT terraform:  + remot 2025-07-12 19:42:12.517218 | orchestrator | 19:42:12.517 STDOUT terraform: e_group_id = (known after apply) 2025-07-12 19:42:12.517262 | 
orchestrator | 19:42:12.517 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-07-12 19:42:12.517304 | orchestrator | 19:42:12.517 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 19:42:12.517348 | orchestrator | 19:42:12.517 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:42:12.517371 | orchestrator | 19:42:12.517 STDOUT terraform:  } 2025-07-12 19:42:12.517491 | orchestrator | 19:42:12.517 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-07-12 19:42:12.517566 | orchestrator | 19:42:12.517 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-07-12 19:42:12.517604 | orchestrator | 19:42:12.517 STDOUT terraform:  + direction = "ingress" 2025-07-12 19:42:12.517639 | orchestrator | 19:42:12.517 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 19:42:12.517684 | orchestrator | 19:42:12.517 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.517753 | orchestrator | 19:42:12.517 STDOUT terraform:  + protocol = "icmp" 2025-07-12 19:42:12.517846 | orchestrator | 19:42:12.517 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.517890 | orchestrator | 19:42:12.517 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 19:42:12.517934 | orchestrator | 19:42:12.517 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 19:42:12.517993 | orchestrator | 19:42:12.517 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-12 19:42:12.518054 | orchestrator | 19:42:12.518 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 19:42:12.518100 | orchestrator | 19:42:12.518 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:42:12.518121 | orchestrator | 19:42:12.518 STDOUT terraform:  } 2025-07-12 19:42:12.518178 | orchestrator | 19:42:12.518 STDOUT terraform:  # 
openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-07-12 19:42:12.518235 | orchestrator | 19:42:12.518 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-07-12 19:42:12.518273 | orchestrator | 19:42:12.518 STDOUT terraform:  + direction = "ingress" 2025-07-12 19:42:12.518305 | orchestrator | 19:42:12.518 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 19:42:12.518350 | orchestrator | 19:42:12.518 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.518380 | orchestrator | 19:42:12.518 STDOUT terraform:  + protocol = "tcp" 2025-07-12 19:42:12.518422 | orchestrator | 19:42:12.518 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.518465 | orchestrator | 19:42:12.518 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 19:42:12.518506 | orchestrator | 19:42:12.518 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 19:42:12.518605 | orchestrator | 19:42:12.518 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-12 19:42:12.518651 | orchestrator | 19:42:12.518 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 19:42:12.518694 | orchestrator | 19:42:12.518 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:42:12.518714 | orchestrator | 19:42:12.518 STDOUT terraform:  } 2025-07-12 19:42:12.518838 | orchestrator | 19:42:12.518 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-07-12 19:42:12.518899 | orchestrator | 19:42:12.518 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-07-12 19:42:12.518936 | orchestrator | 19:42:12.518 STDOUT terraform:  + direction = "ingress" 2025-07-12 19:42:12.518968 | orchestrator | 19:42:12.518 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 19:42:12.519014 | orchestrator | 19:42:12.518 STDOUT terraform:  + id = (known 
after apply) 2025-07-12 19:42:12.519048 | orchestrator | 19:42:12.519 STDOUT terraform:  + protocol = "udp" 2025-07-12 19:42:12.519093 | orchestrator | 19:42:12.519 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.519136 | orchestrator | 19:42:12.519 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 19:42:12.519187 | orchestrator | 19:42:12.519 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 19:42:12.519223 | orchestrator | 19:42:12.519 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-12 19:42:12.519265 | orchestrator | 19:42:12.519 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 19:42:12.519308 | orchestrator | 19:42:12.519 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:42:12.519328 | orchestrator | 19:42:12.519 STDOUT terraform:  } 2025-07-12 19:42:12.519384 | orchestrator | 19:42:12.519 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-07-12 19:42:12.519441 | orchestrator | 19:42:12.519 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-07-12 19:42:12.519482 | orchestrator | 19:42:12.519 STDOUT terraform:  + direction = "ingress" 2025-07-12 19:42:12.519514 | orchestrator | 19:42:12.519 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 19:42:12.519558 | orchestrator | 19:42:12.519 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.519591 | orchestrator | 19:42:12.519 STDOUT terraform:  + protocol = "icmp" 2025-07-12 19:42:12.519691 | orchestrator | 19:42:12.519 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.519737 | orchestrator | 19:42:12.519 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 19:42:12.519795 | orchestrator | 19:42:12.519 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 19:42:12.519833 | orchestrator | 19:42:12.519 STDOUT 
terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-12 19:42:12.519876 | orchestrator | 19:42:12.519 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 19:42:12.519919 | orchestrator | 19:42:12.519 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:42:12.519939 | orchestrator | 19:42:12.519 STDOUT terraform:  } 2025-07-12 19:42:12.519994 | orchestrator | 19:42:12.519 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2025-07-12 19:42:12.520049 | orchestrator | 19:42:12.520 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2025-07-12 19:42:12.520080 | orchestrator | 19:42:12.520 STDOUT terraform:  + description = "vrrp" 2025-07-12 19:42:12.520116 | orchestrator | 19:42:12.520 STDOUT terraform:  + direction = "ingress" 2025-07-12 19:42:12.520148 | orchestrator | 19:42:12.520 STDOUT terraform:  + ethertype = "IPv4" 2025-07-12 19:42:12.520192 | orchestrator | 19:42:12.520 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.520224 | orchestrator | 19:42:12.520 STDOUT terraform:  + protocol = "112" 2025-07-12 19:42:12.520267 | orchestrator | 19:42:12.520 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.520310 | orchestrator | 19:42:12.520 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-07-12 19:42:12.520351 | orchestrator | 19:42:12.520 STDOUT terraform:  + remote_group_id = (known after apply) 2025-07-12 19:42:12.520388 | orchestrator | 19:42:12.520 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-07-12 19:42:12.520436 | orchestrator | 19:42:12.520 STDOUT terraform:  + security_group_id = (known after apply) 2025-07-12 19:42:12.520483 | orchestrator | 19:42:12.520 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:42:12.520503 | orchestrator | 19:42:12.520 STDOUT terraform:  } 2025-07-12 19:42:12.520558 | orchestrator | 19:42:12.520 STDOUT terraform:  # 
openstack_networking_secgroup_v2.security_group_management will be created 2025-07-12 19:42:12.520611 | orchestrator | 19:42:12.520 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" { 2025-07-12 19:42:12.520646 | orchestrator | 19:42:12.520 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:42:12.520686 | orchestrator | 19:42:12.520 STDOUT terraform:  + description = "management security group" 2025-07-12 19:42:12.520779 | orchestrator | 19:42:12.520 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.520833 | orchestrator | 19:42:12.520 STDOUT terraform:  + name = "testbed-management" 2025-07-12 19:42:12.520868 | orchestrator | 19:42:12.520 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.520905 | orchestrator | 19:42:12.520 STDOUT terraform:  + stateful = (known after apply) 2025-07-12 19:42:12.520938 | orchestrator | 19:42:12.520 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:42:12.520958 | orchestrator | 19:42:12.520 STDOUT terraform:  } 2025-07-12 19:42:12.521011 | orchestrator | 19:42:12.520 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created 2025-07-12 19:42:12.521066 | orchestrator | 19:42:12.521 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" { 2025-07-12 19:42:12.521100 | orchestrator | 19:42:12.521 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:42:12.521134 | orchestrator | 19:42:12.521 STDOUT terraform:  + description = "node security group" 2025-07-12 19:42:12.521168 | orchestrator | 19:42:12.521 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.521198 | orchestrator | 19:42:12.521 STDOUT terraform:  + name = "testbed-node" 2025-07-12 19:42:12.521233 | orchestrator | 19:42:12.521 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.521268 | orchestrator | 19:42:12.521 STDOUT terraform:  + stateful = (known after 
apply) 2025-07-12 19:42:12.521303 | orchestrator | 19:42:12.521 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:42:12.521324 | orchestrator | 19:42:12.521 STDOUT terraform:  } 2025-07-12 19:42:12.521373 | orchestrator | 19:42:12.521 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created 2025-07-12 19:42:12.521422 | orchestrator | 19:42:12.521 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" { 2025-07-12 19:42:12.521459 | orchestrator | 19:42:12.521 STDOUT terraform:  + all_tags = (known after apply) 2025-07-12 19:42:12.521494 | orchestrator | 19:42:12.521 STDOUT terraform:  + cidr = "192.168.16.0/20" 2025-07-12 19:42:12.521521 | orchestrator | 19:42:12.521 STDOUT terraform:  + dns_nameservers = [ 2025-07-12 19:42:12.521544 | orchestrator | 19:42:12.521 STDOUT terraform:  + "8.8.8.8", 2025-07-12 19:42:12.521573 | orchestrator | 19:42:12.521 STDOUT terraform:  + "9.9.9.9", 2025-07-12 19:42:12.521593 | orchestrator | 19:42:12.521 STDOUT terraform:  ] 2025-07-12 19:42:12.521623 | orchestrator | 19:42:12.521 STDOUT terraform:  + enable_dhcp = true 2025-07-12 19:42:12.521659 | orchestrator | 19:42:12.521 STDOUT terraform:  + gateway_ip = (known after apply) 2025-07-12 19:42:12.521696 | orchestrator | 19:42:12.521 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.521722 | orchestrator | 19:42:12.521 STDOUT terraform:  + ip_version = 4 2025-07-12 19:42:12.521757 | orchestrator | 19:42:12.521 STDOUT terraform:  + ipv6_address_mode = (known after apply) 2025-07-12 19:42:12.521869 | orchestrator | 19:42:12.521 STDOUT terraform:  + ipv6_ra_mode = (known after apply) 2025-07-12 19:42:12.521919 | orchestrator | 19:42:12.521 STDOUT terraform:  + name = "subnet-testbed-management" 2025-07-12 19:42:12.521957 | orchestrator | 19:42:12.521 STDOUT terraform:  + network_id = (known after apply) 2025-07-12 19:42:12.521985 | orchestrator | 19:42:12.521 STDOUT terraform:  + no_gateway = 
false 2025-07-12 19:42:12.522036 | orchestrator | 19:42:12.521 STDOUT terraform:  + region = (known after apply) 2025-07-12 19:42:12.522076 | orchestrator | 19:42:12.522 STDOUT terraform:  + service_types = (known after apply) 2025-07-12 19:42:12.522113 | orchestrator | 19:42:12.522 STDOUT terraform:  + tenant_id = (known after apply) 2025-07-12 19:42:12.522138 | orchestrator | 19:42:12.522 STDOUT terraform:  + allocation_pool { 2025-07-12 19:42:12.522169 | orchestrator | 19:42:12.522 STDOUT terraform:  + end = "192.168.31.250" 2025-07-12 19:42:12.522199 | orchestrator | 19:42:12.522 STDOUT terraform:  + start = "192.168.31.200" 2025-07-12 19:42:12.522222 | orchestrator | 19:42:12.522 STDOUT terraform:  } 2025-07-12 19:42:12.522242 | orchestrator | 19:42:12.522 STDOUT terraform:  } 2025-07-12 19:42:12.522272 | orchestrator | 19:42:12.522 STDOUT terraform:  # terraform_data.image will be created 2025-07-12 19:42:12.522303 | orchestrator | 19:42:12.522 STDOUT terraform:  + resource "terraform_data" "image" { 2025-07-12 19:42:12.522333 | orchestrator | 19:42:12.522 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.522359 | orchestrator | 19:42:12.522 STDOUT terraform:  + input = "OSISM CI" 2025-07-12 19:42:12.522390 | orchestrator | 19:42:12.522 STDOUT terraform:  + output = (known after apply) 2025-07-12 19:42:12.522410 | orchestrator | 19:42:12.522 STDOUT terraform:  } 2025-07-12 19:42:12.522445 | orchestrator | 19:42:12.522 STDOUT terraform:  # terraform_data.image_node will be created 2025-07-12 19:42:12.522480 | orchestrator | 19:42:12.522 STDOUT terraform:  + resource "terraform_data" "image_node" { 2025-07-12 19:42:12.522510 | orchestrator | 19:42:12.522 STDOUT terraform:  + id = (known after apply) 2025-07-12 19:42:12.522537 | orchestrator | 19:42:12.522 STDOUT terraform:  + input = "OSISM CI" 2025-07-12 19:42:12.522567 | orchestrator | 19:42:12.522 STDOUT terraform:  + output = (known after apply) 2025-07-12 19:42:12.522587 | orchestrator | 
19:42:12.522 STDOUT terraform:  } 2025-07-12 19:42:12.522622 | orchestrator | 19:42:12.522 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy. 2025-07-12 19:42:12.522650 | orchestrator | 19:42:12.522 STDOUT terraform: Changes to Outputs: 2025-07-12 19:42:12.522682 | orchestrator | 19:42:12.522 STDOUT terraform:  + manager_address = (sensitive value) 2025-07-12 19:42:12.522713 | orchestrator | 19:42:12.522 STDOUT terraform:  + private_key = (sensitive value) 2025-07-12 19:42:12.724181 | orchestrator | 19:42:12.723 STDOUT terraform: terraform_data.image: Creating... 2025-07-12 19:42:12.724285 | orchestrator | 19:42:12.724 STDOUT terraform: terraform_data.image_node: Creating... 2025-07-12 19:42:12.724311 | orchestrator | 19:42:12.724 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=a6506b28-c523-d10f-cef5-88326faf4d7b] 2025-07-12 19:42:12.724670 | orchestrator | 19:42:12.724 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=0a746f0f-4e0f-e0db-368b-5b606f90f972] 2025-07-12 19:42:12.742517 | orchestrator | 19:42:12.742 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading... 2025-07-12 19:42:12.742951 | orchestrator | 19:42:12.742 STDOUT terraform: data.openstack_images_image_v2.image: Reading... 2025-07-12 19:42:12.752294 | orchestrator | 19:42:12.752 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2025-07-12 19:42:12.752453 | orchestrator | 19:42:12.752 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2025-07-12 19:42:12.753037 | orchestrator | 19:42:12.752 STDOUT terraform: openstack_networking_network_v2.net_management: Creating... 2025-07-12 19:42:12.753097 | orchestrator | 19:42:12.753 STDOUT terraform: openstack_compute_keypair_v2.key: Creating... 2025-07-12 19:42:12.753619 | orchestrator | 19:42:12.753 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating... 
2025-07-12 19:42:12.753792 | orchestrator | 19:42:12.753 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2025-07-12 19:42:12.758985 | orchestrator | 19:42:12.758 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2025-07-12 19:42:12.760559 | orchestrator | 19:42:12.760 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2025-07-12 19:42:13.208125 | orchestrator | 19:42:13.207 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=d2a2fe36-fd31-4877-956f-b3845278fd6a] 2025-07-12 19:42:13.447633 | orchestrator | 19:42:13.213 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2025-07-12 19:42:13.447721 | orchestrator | 19:42:13.221 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=d2a2fe36-fd31-4877-956f-b3845278fd6a] 2025-07-12 19:42:13.447741 | orchestrator | 19:42:13.230 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2025-07-12 19:42:13.447813 | orchestrator | 19:42:13.299 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed] 2025-07-12 19:42:13.447827 | orchestrator | 19:42:13.306 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2025-07-12 19:42:13.841198 | orchestrator | 19:42:13.840 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=ce37a0b2-b0c7-4f4b-b439-a42f918a4529] 2025-07-12 19:42:13.854681 | orchestrator | 19:42:13.854 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 
2025-07-12 19:42:16.400068 | orchestrator | 19:42:16.399 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=cbc49688-9ad7-4fd0-a52c-a19b0583b25c] 2025-07-12 19:42:16.411879 | orchestrator | 19:42:16.411 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=1628f950-5804-44ef-9d42-f709daecc346] 2025-07-12 19:42:16.415542 | orchestrator | 19:42:16.415 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 2025-07-12 19:42:16.423359 | orchestrator | 19:42:16.423 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=9f08906f-6338-431f-a878-f727643915a4] 2025-07-12 19:42:16.423465 | orchestrator | 19:42:16.423 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2025-07-12 19:42:16.425872 | orchestrator | 19:42:16.425 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=d5652225-c6ef-49dc-a608-4c92c2a71dd6] 2025-07-12 19:42:16.431176 | orchestrator | 19:42:16.431 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2025-07-12 19:42:16.433393 | orchestrator | 19:42:16.433 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2025-07-12 19:42:16.437260 | orchestrator | 19:42:16.437 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=1d5b9d5f-7727-4753-bdb1-c3a309291ad5] 2025-07-12 19:42:16.455460 | orchestrator | 19:42:16.455 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 
2025-07-12 19:42:16.483423 | orchestrator | 19:42:16.483 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=fe3c3c4e-2b96-4bec-8093-d77b3db985a2] 2025-07-12 19:42:16.493003 | orchestrator | 19:42:16.492 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=736d04ae-95cc-4835-aff1-6fbe44d77808] 2025-07-12 19:42:16.493152 | orchestrator | 19:42:16.492 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2025-07-12 19:42:16.506652 | orchestrator | 19:42:16.506 STDOUT terraform: local_file.id_rsa_pub: Creating... 2025-07-12 19:42:16.506900 | orchestrator | 19:42:16.506 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=47b67cf6-6134-4ebc-b4bd-75f5912c51d1] 2025-07-12 19:42:16.515060 | orchestrator | 19:42:16.514 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=14091d78082848e4ad8597d175b0aa275b0e3da4] 2025-07-12 19:42:16.517535 | orchestrator | 19:42:16.517 STDOUT terraform: local_sensitive_file.id_rsa: Creating... 2025-07-12 19:42:16.521011 | orchestrator | 19:42:16.520 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=b44fe0497486ef158baac7969f7b3702e6881d50] 2025-07-12 19:42:16.522139 | orchestrator | 19:42:16.521 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating... 
2025-07-12 19:42:16.532501 | orchestrator | 19:42:16.532 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=e02eada2-9691-4994-b44c-0b327a73be9a] 2025-07-12 19:42:17.204009 | orchestrator | 19:42:17.203 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=5410106d-ed3b-4664-9779-6ad1cc9646b0] 2025-07-12 19:42:17.518264 | orchestrator | 19:42:17.517 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=a17d6a30-4d69-4934-996e-d435cfafd3f3] 2025-07-12 19:42:17.533277 | orchestrator | 19:42:17.529 STDOUT terraform: openstack_networking_router_v2.router: Creating... 2025-07-12 19:42:19.815231 | orchestrator | 19:42:19.814 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=3da5b399-01e7-4def-b33a-29c13319e0e2] 2025-07-12 19:42:19.823523 | orchestrator | 19:42:19.823 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=2900e8ba-3a3c-419f-a89d-80346bc85f37] 2025-07-12 19:42:19.871848 | orchestrator | 19:42:19.871 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=a9eb58a9-7a8d-4884-8549-7422e45233bf] 2025-07-12 19:42:19.877272 | orchestrator | 19:42:19.877 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=62f31422-022f-413d-8784-b59e1dab1027] 2025-07-12 19:42:19.879972 | orchestrator | 19:42:19.879 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=956b92a8-e2a8-4c28-b21e-590538c1fc3c] 2025-07-12 19:42:19.920980 | orchestrator | 19:42:19.920 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=27592039-7138-4b46-93ea-5de96d5c100b] 2025-07-12 19:42:20.050618 | orchestrator | 19:42:20.050 STDOUT terraform: 
openstack_networking_router_v2.router: Creation complete after 2s [id=2e415cf4-21f4-4cc4-9650-91b0cc26caee] 2025-07-12 19:42:20.056902 | orchestrator | 19:42:20.056 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating... 2025-07-12 19:42:20.057230 | orchestrator | 19:42:20.056 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating... 2025-07-12 19:42:20.063429 | orchestrator | 19:42:20.063 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating... 2025-07-12 19:42:20.231335 | orchestrator | 19:42:20.231 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=624ec12a-19fb-427d-bd54-1f51561bc09e] 2025-07-12 19:42:20.244284 | orchestrator | 19:42:20.243 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2025-07-12 19:42:20.250551 | orchestrator | 19:42:20.250 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2025-07-12 19:42:20.257709 | orchestrator | 19:42:20.257 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating... 2025-07-12 19:42:20.260820 | orchestrator | 19:42:20.260 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2025-07-12 19:42:20.261872 | orchestrator | 19:42:20.261 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2025-07-12 19:42:20.265659 | orchestrator | 19:42:20.265 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating... 2025-07-12 19:42:20.267263 | orchestrator | 19:42:20.267 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating... 2025-07-12 19:42:20.268947 | orchestrator | 19:42:20.268 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating... 
2025-07-12 19:42:20.278061 | orchestrator | 19:42:20.277 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=4d00be29-bfb2-4542-b42b-3d9efb0c8edb] 2025-07-12 19:42:20.285670 | orchestrator | 19:42:20.285 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating... 2025-07-12 19:42:20.805355 | orchestrator | 19:42:20.805 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=ee2fbf57-f703-47b2-9a96-c3b00685d847] 2025-07-12 19:42:20.817340 | orchestrator | 19:42:20.817 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating... 2025-07-12 19:42:20.850826 | orchestrator | 19:42:20.850 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=1409262b-fa78-4c33-b109-3c58f6325140] 2025-07-12 19:42:20.857281 | orchestrator | 19:42:20.856 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=0418679f-3a6c-4fe5-b5ef-ac5b0e343fd3] 2025-07-12 19:42:20.863057 | orchestrator | 19:42:20.862 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating... 2025-07-12 19:42:20.865638 | orchestrator | 19:42:20.865 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2025-07-12 19:42:20.871551 | orchestrator | 19:42:20.871 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=20ec1b06-347a-4b04-90ad-47a55083f9e6] 2025-07-12 19:42:20.879707 | orchestrator | 19:42:20.879 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 
2025-07-12 19:42:20.908498 | orchestrator | 19:42:20.908 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=6d9852ef-a7cd-4f9d-8096-9ed4e05dba6e] 2025-07-12 19:42:20.914348 | orchestrator | 19:42:20.914 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 2025-07-12 19:42:21.011344 | orchestrator | 19:42:21.010 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=afd15b43-79de-44c9-b7cd-71fd79afcabb] 2025-07-12 19:42:21.019538 | orchestrator | 19:42:21.019 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2025-07-12 19:42:21.190750 | orchestrator | 19:42:21.190 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=2cd9645f-2599-425b-b207-c5ceff46933b] 2025-07-12 19:42:21.198654 | orchestrator | 19:42:21.198 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 
2025-07-12 19:42:21.325942 | orchestrator | 19:42:21.325 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=36e6c650-25a7-4330-8607-960d1f419423] 2025-07-12 19:42:21.331189 | orchestrator | 19:42:21.330 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=59aa31e3-45dd-431c-8924-c6c5f0d82f28] 2025-07-12 19:42:21.440870 | orchestrator | 19:42:21.440 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=b08fe352-7e48-4c34-8739-49897e35b98f] 2025-07-12 19:42:21.473588 | orchestrator | 19:42:21.473 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 0s [id=391b19a8-8ed6-434c-bde9-6aafdeb0a3e0] 2025-07-12 19:42:21.517638 | orchestrator | 19:42:21.517 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 2s [id=c473efe7-5fff-4ab3-8027-32c040202e27] 2025-07-12 19:42:21.650505 | orchestrator | 19:42:21.650 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 1s [id=6c9c883b-0d88-447a-abb0-887a12507749] 2025-07-12 19:42:21.657096 | orchestrator | 19:42:21.656 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 2s [id=2705be62-bea9-4003-9c27-386cd13e5c5c] 2025-07-12 19:42:21.858599 | orchestrator | 19:42:21.858 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 2s [id=4f96f00e-eaf2-42b8-809e-6d946a7ea1b4] 2025-07-12 19:42:22.005189 | orchestrator | 19:42:22.004 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 2s [id=d691eb5d-e382-45f6-a725-d1de1559aaf3] 2025-07-12 19:42:24.799131 | orchestrator | 19:42:24.798 STDOUT terraform: 
openstack_networking_router_interface_v2.router_interface: Creation complete after 5s [id=b488eb6a-0d29-4cfd-9184-79d77dcd2af6] 2025-07-12 19:42:24.827051 | orchestrator | 19:42:24.826 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2025-07-12 19:42:24.844288 | orchestrator | 19:42:24.844 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating... 2025-07-12 19:42:24.849789 | orchestrator | 19:42:24.849 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating... 2025-07-12 19:42:24.853416 | orchestrator | 19:42:24.853 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating... 2025-07-12 19:42:24.854475 | orchestrator | 19:42:24.854 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating... 2025-07-12 19:42:24.858445 | orchestrator | 19:42:24.858 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating... 2025-07-12 19:42:24.862387 | orchestrator | 19:42:24.862 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating... 2025-07-12 19:42:26.292389 | orchestrator | 19:42:26.291 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=093c8238-1e04-4be0-aa3f-0c4a6572fd06] 2025-07-12 19:42:26.303288 | orchestrator | 19:42:26.303 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2025-07-12 19:42:26.305744 | orchestrator | 19:42:26.305 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating... 2025-07-12 19:42:26.316374 | orchestrator | 19:42:26.316 STDOUT terraform: local_file.inventory: Creating... 
2025-07-12 19:42:26.318121 | orchestrator | 19:42:26.317 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=04f3b2225aac618f70d9843f2ce064cdcbed1433] 2025-07-12 19:42:26.321836 | orchestrator | 19:42:26.321 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=de616f10ed7534f9aabf0516685f1a0fe4178f67] 2025-07-12 19:42:27.216202 | orchestrator | 19:42:27.215 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=093c8238-1e04-4be0-aa3f-0c4a6572fd06] 2025-07-12 19:42:34.854048 | orchestrator | 19:42:34.853 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2025-07-12 19:42:34.854385 | orchestrator | 19:42:34.854 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2025-07-12 19:42:34.855158 | orchestrator | 19:42:34.854 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2025-07-12 19:42:34.860944 | orchestrator | 19:42:34.860 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2025-07-12 19:42:34.862902 | orchestrator | 19:42:34.862 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2025-07-12 19:42:34.863219 | orchestrator | 19:42:34.862 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed] 2025-07-12 19:42:44.858979 | orchestrator | 19:42:44.858 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2025-07-12 19:42:44.859142 | orchestrator | 19:42:44.858 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2025-07-12 19:42:44.859222 | orchestrator | 19:42:44.859 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... 
[20s elapsed] 2025-07-12 19:42:44.861028 | orchestrator | 19:42:44.860 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2025-07-12 19:42:44.863220 | orchestrator | 19:42:44.862 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2025-07-12 19:42:44.863342 | orchestrator | 19:42:44.863 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2025-07-12 19:42:45.238275 | orchestrator | 19:42:45.237 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=1cb4f14b-a823-42eb-86f3-4d1e4673846f] 2025-07-12 19:42:45.649590 | orchestrator | 19:42:45.649 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=278229bc-dfe2-444f-9f0a-e155f115bdbe] 2025-07-12 19:42:45.775486 | orchestrator | 19:42:45.775 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=9af4995a-7a6a-448f-bee3-e9eebbd8aa98] 2025-07-12 19:42:54.863056 | orchestrator | 19:42:54.862 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2025-07-12 19:42:54.863259 | orchestrator | 19:42:54.863 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2025-07-12 19:42:54.863513 | orchestrator | 19:42:54.863 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... 
[30s elapsed] 2025-07-12 19:42:55.479944 | orchestrator | 19:42:55.479 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 30s [id=6e4db401-31a5-4e26-8ffa-bd6e76b47f26] 2025-07-12 19:42:55.544344 | orchestrator | 19:42:55.543 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=b7effdd0-24e7-4e38-a7f6-274a33f7857b] 2025-07-12 19:42:55.592407 | orchestrator | 19:42:55.591 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=214760fe-2590-4ba9-8981-a4e00dace0cd] 2025-07-12 19:42:55.617559 | orchestrator | 19:42:55.617 STDOUT terraform: null_resource.node_semaphore: Creating... 2025-07-12 19:42:55.624389 | orchestrator | 19:42:55.624 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=6285756390156110756] 2025-07-12 19:42:55.626912 | orchestrator | 19:42:55.626 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2025-07-12 19:42:55.634869 | orchestrator | 19:42:55.634 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2025-07-12 19:42:55.647233 | orchestrator | 19:42:55.646 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2025-07-12 19:42:55.648087 | orchestrator | 19:42:55.647 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2025-07-12 19:42:55.650120 | orchestrator | 19:42:55.649 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2025-07-12 19:42:55.650152 | orchestrator | 19:42:55.649 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2025-07-12 19:42:55.663261 | orchestrator | 19:42:55.663 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 
2025-07-12 19:42:55.666947 | orchestrator | 19:42:55.666 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-07-12 19:42:55.676255 | orchestrator | 19:42:55.675 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-07-12 19:42:55.681221 | orchestrator | 19:42:55.681 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-07-12 19:42:59.011065 | orchestrator | 19:42:59.010 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=278229bc-dfe2-444f-9f0a-e155f115bdbe/d5652225-c6ef-49dc-a608-4c92c2a71dd6] 2025-07-12 19:42:59.057584 | orchestrator | 19:42:59.057 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=278229bc-dfe2-444f-9f0a-e155f115bdbe/1628f950-5804-44ef-9d42-f709daecc346] 2025-07-12 19:42:59.062087 | orchestrator | 19:42:59.061 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=6e4db401-31a5-4e26-8ffa-bd6e76b47f26/fe3c3c4e-2b96-4bec-8093-d77b3db985a2] 2025-07-12 19:42:59.086504 | orchestrator | 19:42:59.086 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=6e4db401-31a5-4e26-8ffa-bd6e76b47f26/e02eada2-9691-4994-b44c-0b327a73be9a] 2025-07-12 19:42:59.088844 | orchestrator | 19:42:59.088 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 3s [id=9af4995a-7a6a-448f-bee3-e9eebbd8aa98/1d5b9d5f-7727-4753-bdb1-c3a309291ad5] 2025-07-12 19:42:59.309477 | orchestrator | 19:42:59.309 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=9af4995a-7a6a-448f-bee3-e9eebbd8aa98/736d04ae-95cc-4835-aff1-6fbe44d77808] 2025-07-12 19:43:05.164299 | orchestrator | 19:43:05.163 STDOUT terraform: 
openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=278229bc-dfe2-444f-9f0a-e155f115bdbe/9f08906f-6338-431f-a878-f727643915a4] 2025-07-12 19:43:05.170010 | orchestrator | 19:43:05.169 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 9s [id=6e4db401-31a5-4e26-8ffa-bd6e76b47f26/47b67cf6-6134-4ebc-b4bd-75f5912c51d1] 2025-07-12 19:43:05.194563 | orchestrator | 19:43:05.194 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 9s [id=9af4995a-7a6a-448f-bee3-e9eebbd8aa98/cbc49688-9ad7-4fd0-a52c-a19b0583b25c] 2025-07-12 19:43:05.684360 | orchestrator | 19:43:05.684 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-07-12 19:43:15.685343 | orchestrator | 19:43:15.685 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-07-12 19:43:15.913280 | orchestrator | 19:43:15.912 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=6ce77929-5112-446f-8891-edcd39069917] 2025-07-12 19:43:15.942256 | orchestrator | 19:43:15.941 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
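The apply above ends with Terraform's fixed-format summary line ("Apply complete! Resources: 64 added, 0 changed, 0 destroyed."). When post-processing job logs like this one, that line is a convenient place to extract resource counts. A minimal sketch in plain Python (illustrative only, not part of the job; the function name is made up):

```python
import re

# Terraform prints this fixed-format summary after a successful apply.
SUMMARY_RE = re.compile(
    r"Apply complete! Resources: (\d+) added, (\d+) changed, (\d+) destroyed\."
)

def parse_apply_summary(line: str) -> dict:
    """Return the added/changed/destroyed counts from a summary line."""
    m = SUMMARY_RE.search(line)
    if m is None:
        raise ValueError("no apply summary found")
    added, changed, destroyed = map(int, m.groups())
    return {"added": added, "changed": changed, "destroyed": destroyed}

line = "Apply complete! Resources: 64 added, 0 changed, 0 destroyed."
print(parse_apply_summary(line))  # {'added': 64, 'changed': 0, 'destroyed': 0}
```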
2025-07-12 19:43:15.942338 | orchestrator | 19:43:15.942 STDOUT terraform: Outputs: 2025-07-12 19:43:15.942352 | orchestrator | 19:43:15.942 STDOUT terraform: manager_address = 2025-07-12 19:43:15.942373 | orchestrator | 19:43:15.942 STDOUT terraform: private_key = 2025-07-12 19:43:16.184911 | orchestrator | ok: Runtime: 0:01:12.880687 2025-07-12 19:43:16.222532 | 2025-07-12 19:43:16.222652 | TASK [Fetch manager address] 2025-07-12 19:43:16.696957 | orchestrator | ok 2025-07-12 19:43:16.707272 | 2025-07-12 19:43:16.707412 | TASK [Set manager_host address] 2025-07-12 19:43:16.783662 | orchestrator | ok 2025-07-12 19:43:16.795454 | 2025-07-12 19:43:16.795642 | LOOP [Update ansible collections] 2025-07-12 19:43:17.840356 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-07-12 19:43:17.840752 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-12 19:43:17.840820 | orchestrator | Starting galaxy collection install process 2025-07-12 19:43:17.840861 | orchestrator | Process install dependency map 2025-07-12 19:43:17.840897 | orchestrator | Starting collection install process 2025-07-12 19:43:17.840985 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons' 2025-07-12 19:43:17.841028 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons 2025-07-12 19:43:17.841084 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-07-12 19:43:17.841162 | orchestrator | ok: Item: commons Runtime: 0:00:00.682827 2025-07-12 19:43:18.739480 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-12 19:43:18.739736 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-07-12 19:43:18.739800 | orchestrator | Starting galaxy 
collection install process 2025-07-12 19:43:18.739841 | orchestrator | Process install dependency map 2025-07-12 19:43:18.739880 | orchestrator | Starting collection install process 2025-07-12 19:43:18.739917 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services' 2025-07-12 19:43:18.739974 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services 2025-07-12 19:43:18.740008 | orchestrator | osism.services:999.0.0 was installed successfully 2025-07-12 19:43:18.740059 | orchestrator | ok: Item: services Runtime: 0:00:00.606827 2025-07-12 19:43:18.766093 | 2025-07-12 19:43:18.766360 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-07-12 19:43:30.359597 | orchestrator | ok 2025-07-12 19:43:30.370290 | 2025-07-12 19:43:30.370435 | TASK [Wait a little longer for the manager so that everything is ready] 2025-07-12 19:44:30.414568 | orchestrator | ok 2025-07-12 19:44:30.425348 | 2025-07-12 19:44:30.425473 | TASK [Fetch manager ssh hostkey] 2025-07-12 19:44:32.005066 | orchestrator | Output suppressed because no_log was given 2025-07-12 19:44:32.021330 | 2025-07-12 19:44:32.021495 | TASK [Get ssh keypair from terraform environment] 2025-07-12 19:44:32.557573 | orchestrator | ok: Runtime: 0:00:00.010002 2025-07-12 19:44:32.573574 | 2025-07-12 19:44:32.573713 | TASK [Point out that the following task takes some time and does not give any output] 2025-07-12 19:44:32.609804 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
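The task "Wait up to 300 seconds for port 22 to become open and contain \"OpenSSH\"" above is presumably Ansible's `wait_for` with a `search_regex`; the check it performs can be approximated in plain Python. A minimal sketch under that assumption (function names are made up for illustration):

```python
import socket
import time

def banner_matches(banner: bytes, needle: bytes = b"OpenSSH") -> bool:
    """True if the server's SSH identification banner contains the needle."""
    return needle in banner

def wait_for_ssh(host: str, port: int = 22, timeout: float = 300.0) -> bool:
    """Poll until the port accepts a connection and sends an OpenSSH banner."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=5) as sock:
                # An SSH server sends its banner first, e.g. "SSH-2.0-OpenSSH_9.6p1".
                if banner_matches(sock.recv(256)):
                    return True
        except OSError:
            pass  # port not open yet; retry
        time.sleep(5)
    return False

print(banner_matches(b"SSH-2.0-OpenSSH_9.6p1 Ubuntu"))  # True
```

The follow-up task ("Wait a little longer for the manager") then sleeps a fixed minute on top, since an open SSH port does not mean cloud-init has finished.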
2025-07-12 19:44:32.618548 | 2025-07-12 19:44:32.618658 | TASK [Run manager part 0] 2025-07-12 19:44:33.813331 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-12 19:44:33.972436 | orchestrator | 2025-07-12 19:44:33.972495 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-07-12 19:44:33.972503 | orchestrator | 2025-07-12 19:44:33.972517 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-07-12 19:44:34.852396 | orchestrator | ok: [testbed-manager] 2025-07-12 19:44:34.852732 | orchestrator | 2025-07-12 19:44:34.852787 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-07-12 19:44:34.852799 | orchestrator | 2025-07-12 19:44:34.852810 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 19:44:36.733362 | orchestrator | ok: [testbed-manager] 2025-07-12 19:44:36.733485 | orchestrator | 2025-07-12 19:44:36.733495 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-07-12 19:44:37.388685 | orchestrator | ok: [testbed-manager] 2025-07-12 19:44:37.388755 | orchestrator | 2025-07-12 19:44:37.388788 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-07-12 19:44:37.444481 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:44:37.444530 | orchestrator | 2025-07-12 19:44:37.444540 | orchestrator | TASK [Update package cache] **************************************************** 2025-07-12 19:44:37.472386 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:44:37.472475 | orchestrator | 2025-07-12 19:44:37.472495 | orchestrator | TASK [Install required packages] *********************************************** 2025-07-12 19:44:37.502956 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:44:37.503015 | 
orchestrator | 2025-07-12 19:44:37.503024 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-07-12 19:44:37.528519 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:44:37.528562 | orchestrator | 2025-07-12 19:44:37.528567 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-07-12 19:44:37.555251 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:44:37.555281 | orchestrator | 2025-07-12 19:44:37.555287 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-07-12 19:44:37.605779 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:44:37.605813 | orchestrator | 2025-07-12 19:44:37.605820 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-07-12 19:44:37.622701 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:44:37.622742 | orchestrator | 2025-07-12 19:44:37.622749 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-07-12 19:44:38.344057 | orchestrator | changed: [testbed-manager] 2025-07-12 19:44:38.344122 | orchestrator | 2025-07-12 19:44:38.344131 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-07-12 19:44:42.166349 | orchestrator | ok: [testbed-manager] 2025-07-12 19:44:42.166439 | orchestrator | 2025-07-12 19:44:42.166459 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-07-12 19:44:43.791684 | orchestrator | ok: [testbed-manager] 2025-07-12 19:44:43.791794 | orchestrator | 2025-07-12 19:44:43.791812 | orchestrator | TASK [Install required packages] *********************************************** 2025-07-12 19:44:59.488356 | orchestrator | changed: [testbed-manager] 2025-07-12 19:44:59.488437 | orchestrator | 2025-07-12 19:44:59.488455 | orchestrator | TASK [Remove some python 
packages] ********************************************* 2025-07-12 19:45:09.186509 | orchestrator | changed: [testbed-manager] 2025-07-12 19:45:09.186554 | orchestrator | 2025-07-12 19:45:09.186562 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-07-12 19:45:09.235660 | orchestrator | ok: [testbed-manager] 2025-07-12 19:45:09.235701 | orchestrator | 2025-07-12 19:45:09.235709 | orchestrator | TASK [Get current user] ******************************************************** 2025-07-12 19:45:10.069287 | orchestrator | ok: [testbed-manager] 2025-07-12 19:45:10.069378 | orchestrator | 2025-07-12 19:45:10.069395 | orchestrator | TASK [Create venv directory] *************************************************** 2025-07-12 19:45:10.833173 | orchestrator | changed: [testbed-manager] 2025-07-12 19:45:10.833232 | orchestrator | 2025-07-12 19:45:10.833240 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-07-12 19:45:17.263569 | orchestrator | changed: [testbed-manager] 2025-07-12 19:45:17.263674 | orchestrator | 2025-07-12 19:45:17.263719 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-07-12 19:45:23.104897 | orchestrator | changed: [testbed-manager] 2025-07-12 19:45:23.104993 | orchestrator | 2025-07-12 19:45:23.105009 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-07-12 19:45:25.805155 | orchestrator | changed: [testbed-manager] 2025-07-12 19:45:25.805222 | orchestrator | 2025-07-12 19:45:25.805237 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-07-12 19:45:27.609337 | orchestrator | changed: [testbed-manager] 2025-07-12 19:45:27.609442 | orchestrator | 2025-07-12 19:45:27.609471 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-07-12 19:45:28.768008 | 
orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-07-12 19:45:28.768069 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-07-12 19:45:28.768077 | orchestrator | 2025-07-12 19:45:28.768085 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-07-12 19:45:28.814104 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-07-12 19:45:28.814209 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-07-12 19:45:28.814233 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-07-12 19:45:28.814253 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-07-12 19:45:31.356849 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-07-12 19:45:31.356915 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-07-12 19:45:31.356927 | orchestrator | 2025-07-12 19:45:31.356936 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-07-12 19:45:31.981204 | orchestrator | ok: [testbed-manager] 2025-07-12 19:45:31.981245 | orchestrator | 2025-07-12 19:45:31.981255 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-07-12 19:45:47.067427 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-07-12 19:45:47.067490 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-07-12 19:45:47.067500 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-07-12 19:45:47.067508 | orchestrator | 2025-07-12 19:45:47.067516 | orchestrator | TASK [Install local collections] *********************************************** 2025-07-12 19:45:49.876930 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 
2025-07-12 19:45:49.877015 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-07-12 19:45:49.877031 | orchestrator | 2025-07-12 19:45:49.877047 | orchestrator | PLAY [Create operator user] **************************************************** 2025-07-12 19:45:49.877060 | orchestrator | 2025-07-12 19:45:49.877071 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 19:45:51.333907 | orchestrator | ok: [testbed-manager] 2025-07-12 19:45:51.333998 | orchestrator | 2025-07-12 19:45:51.334047 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-07-12 19:45:51.383394 | orchestrator | ok: [testbed-manager] 2025-07-12 19:45:51.383477 | orchestrator | 2025-07-12 19:45:51.383488 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-07-12 19:45:51.457227 | orchestrator | ok: [testbed-manager] 2025-07-12 19:45:51.457320 | orchestrator | 2025-07-12 19:45:51.457338 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-07-12 19:45:52.183274 | orchestrator | ok: [testbed-manager] 2025-07-12 19:45:52.184035 | orchestrator | 2025-07-12 19:45:52.184056 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-07-12 19:45:52.844686 | orchestrator | ok: [testbed-manager] 2025-07-12 19:45:52.844733 | orchestrator | 2025-07-12 19:45:52.844743 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-07-12 19:45:54.193742 | orchestrator | ok: [testbed-manager] => (item=adm) 2025-07-12 19:45:54.193814 | orchestrator | ok: [testbed-manager] => (item=sudo) 2025-07-12 19:45:54.193840 | orchestrator | 2025-07-12 19:45:54.193849 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-07-12 19:45:55.415222 | orchestrator | 
ok: [testbed-manager] 2025-07-12 19:45:55.415347 | orchestrator | 2025-07-12 19:45:55.415395 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-07-12 19:45:57.221805 | orchestrator | ok: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-07-12 19:45:57.222513 | orchestrator | ok: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-07-12 19:45:57.222538 | orchestrator | ok: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-07-12 19:45:57.222550 | orchestrator | 2025-07-12 19:45:57.222585 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-07-12 19:45:57.279637 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:45:57.279706 | orchestrator | 2025-07-12 19:45:57.279717 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-07-12 19:45:57.892493 | orchestrator | ok: [testbed-manager] 2025-07-12 19:45:57.892586 | orchestrator | 2025-07-12 19:45:57.892603 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-07-12 19:45:57.964050 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:45:57.964115 | orchestrator | 2025-07-12 19:45:57.964125 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-07-12 19:45:58.857577 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-12 19:45:58.857677 | orchestrator | changed: [testbed-manager] 2025-07-12 19:45:58.857696 | orchestrator | 2025-07-12 19:45:58.857709 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-07-12 19:45:58.894941 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:45:58.895006 | orchestrator | 2025-07-12 19:45:58.895016 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-07-12 19:45:58.927845 | 
orchestrator | skipping: [testbed-manager] 2025-07-12 19:45:58.927913 | orchestrator | 2025-07-12 19:45:58.927929 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-07-12 19:45:58.964088 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:45:58.964163 | orchestrator | 2025-07-12 19:45:58.964177 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-07-12 19:45:59.017238 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:45:59.017351 | orchestrator | 2025-07-12 19:45:59.017368 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-07-12 19:45:59.775349 | orchestrator | ok: [testbed-manager] 2025-07-12 19:45:59.775430 | orchestrator | 2025-07-12 19:45:59.775446 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-07-12 19:45:59.775459 | orchestrator | 2025-07-12 19:45:59.775470 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 19:46:01.298075 | orchestrator | ok: [testbed-manager] 2025-07-12 19:46:01.298173 | orchestrator | 2025-07-12 19:46:01.298192 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-07-12 19:46:02.281168 | orchestrator | changed: [testbed-manager] 2025-07-12 19:46:02.282066 | orchestrator | 2025-07-12 19:46:02.282102 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:46:02.282119 | orchestrator | testbed-manager : ok=33 changed=14 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-07-12 19:46:02.282131 | orchestrator | 2025-07-12 19:46:02.713607 | orchestrator | ok: Runtime: 0:01:29.452144 2025-07-12 19:46:02.732874 | 2025-07-12 19:46:02.733104 | TASK [Point out that the log in on the manager is now possible] 2025-07-12 19:46:02.782363 | orchestrator | ok: It is 
now already possible to log in to the manager with 'make login'. 2025-07-12 19:46:02.792569 | 2025-07-12 19:46:02.792703 | TASK [Point out that the following task takes some time and does not give any output] 2025-07-12 19:46:02.825476 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-07-12 19:46:02.833551 | 2025-07-12 19:46:02.833672 | TASK [Run manager part 1 + 2] 2025-07-12 19:46:03.649232 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-07-12 19:46:03.703978 | orchestrator | 2025-07-12 19:46:03.704027 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-07-12 19:46:03.704034 | orchestrator | 2025-07-12 19:46:03.704047 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 19:46:06.308530 | orchestrator | ok: [testbed-manager] 2025-07-12 19:46:06.308581 | orchestrator | 2025-07-12 19:46:06.308602 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-07-12 19:46:06.352556 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:46:06.352611 | orchestrator | 2025-07-12 19:46:06.352624 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-07-12 19:46:06.401084 | orchestrator | ok: [testbed-manager] 2025-07-12 19:46:06.401153 | orchestrator | 2025-07-12 19:46:06.401167 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-07-12 19:46:06.446879 | orchestrator | ok: [testbed-manager] 2025-07-12 19:46:06.446929 | orchestrator | 2025-07-12 19:46:06.446937 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-07-12 19:46:06.511313 | orchestrator | ok: [testbed-manager] 2025-07-12 
19:46:06.511365 | orchestrator | 2025-07-12 19:46:06.511373 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-07-12 19:46:06.578956 | orchestrator | ok: [testbed-manager] 2025-07-12 19:46:06.579010 | orchestrator | 2025-07-12 19:46:06.579022 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-07-12 19:46:06.630605 | orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-07-12 19:46:06.630697 | orchestrator | 2025-07-12 19:46:06.630712 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-07-12 19:46:07.353450 | orchestrator | ok: [testbed-manager] 2025-07-12 19:46:07.353596 | orchestrator | 2025-07-12 19:46:07.353611 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-07-12 19:46:07.404297 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:46:07.404370 | orchestrator | 2025-07-12 19:46:07.404384 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-07-12 19:46:08.821683 | orchestrator | changed: [testbed-manager] 2025-07-12 19:46:08.821784 | orchestrator | 2025-07-12 19:46:08.821802 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-07-12 19:46:09.403777 | orchestrator | ok: [testbed-manager] 2025-07-12 19:46:09.403869 | orchestrator | 2025-07-12 19:46:09.403886 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-07-12 19:46:10.544824 | orchestrator | changed: [testbed-manager] 2025-07-12 19:46:10.544903 | orchestrator | 2025-07-12 19:46:10.544921 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-07-12 19:46:11.591282 | orchestrator | ok: 
[testbed-manager] 2025-07-12 19:46:11.591337 | orchestrator | 2025-07-12 19:46:11.591348 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-07-12 19:46:12.252853 | orchestrator | ok: [testbed-manager] 2025-07-12 19:46:12.253001 | orchestrator | 2025-07-12 19:46:12.253046 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-07-12 19:46:12.309577 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:46:12.309658 | orchestrator | 2025-07-12 19:46:12.309673 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-07-12 19:46:13.283690 | orchestrator | changed: [testbed-manager] 2025-07-12 19:46:13.283807 | orchestrator | 2025-07-12 19:46:13.283825 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-07-12 19:46:14.278417 | orchestrator | changed: [testbed-manager] 2025-07-12 19:46:14.278456 | orchestrator | 2025-07-12 19:46:14.278467 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-07-12 19:46:14.881205 | orchestrator | changed: [testbed-manager] 2025-07-12 19:46:14.881289 | orchestrator | 2025-07-12 19:46:14.881305 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-07-12 19:46:14.921532 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-07-12 19:46:14.921614 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-07-12 19:46:14.921654 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-07-12 19:46:14.921668 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-07-12 19:46:16.782003 | orchestrator | changed: [testbed-manager] 2025-07-12 19:46:16.782125 | orchestrator | 2025-07-12 19:46:16.782143 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-07-12 19:46:25.594415 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-07-12 19:46:25.594553 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-07-12 19:46:25.594591 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-07-12 19:46:25.594624 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-07-12 19:46:25.594665 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-07-12 19:46:25.594693 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-07-12 19:46:25.594721 | orchestrator | 2025-07-12 19:46:25.594751 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-07-12 19:46:26.680264 | orchestrator | changed: [testbed-manager] 2025-07-12 19:46:26.680304 | orchestrator | 2025-07-12 19:46:26.680311 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-07-12 19:46:26.729643 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:46:26.729685 | orchestrator | 2025-07-12 19:46:26.729696 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-07-12 19:46:30.029384 | orchestrator | changed: [testbed-manager] 2025-07-12 19:46:30.029500 | orchestrator | 2025-07-12 19:46:30.029524 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-07-12 19:46:30.067455 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:46:30.067491 | orchestrator | 2025-07-12 19:46:30.067497 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-07-12 19:47:31.278056 | orchestrator | changed: [testbed-manager] 2025-07-12 
19:47:31.278141 | orchestrator | 2025-07-12 19:47:31.278158 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-07-12 19:47:32.263139 | orchestrator | ok: [testbed-manager] 2025-07-12 19:47:32.263216 | orchestrator | 2025-07-12 19:47:32.263235 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:47:32.263249 | orchestrator | testbed-manager : ok=21 changed=10 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-07-12 19:47:32.263261 | orchestrator | 2025-07-12 19:47:32.426313 | orchestrator | ok: Runtime: 0:01:29.206650 2025-07-12 19:47:32.445609 | 2025-07-12 19:47:32.445774 | TASK [Reboot manager] 2025-07-12 19:47:33.482663 | orchestrator | ok: Runtime: 0:00:00.466393 2025-07-12 19:47:33.502948 | 2025-07-12 19:47:33.503221 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-07-12 19:47:45.757934 | orchestrator | ok 2025-07-12 19:47:45.768020 | 2025-07-12 19:47:45.768169 | TASK [Wait a little longer for the manager so that everything is ready] 2025-07-12 19:48:45.812898 | orchestrator | ok 2025-07-12 19:48:45.822073 | 2025-07-12 19:48:45.822224 | TASK [Deploy manager + bootstrap nodes] 2025-07-12 19:48:47.267426 | orchestrator | 2025-07-12 19:48:47.267579 | orchestrator | # DEPLOY MANAGER 2025-07-12 19:48:47.267592 | orchestrator | 2025-07-12 19:48:47.267601 | orchestrator | + set -e 2025-07-12 19:48:47.267609 | orchestrator | + echo 2025-07-12 19:48:47.267617 | orchestrator | + echo '# DEPLOY MANAGER' 2025-07-12 19:48:47.267627 | orchestrator | + echo 2025-07-12 19:48:47.267661 | orchestrator | + cat /opt/manager-vars.sh 2025-07-12 19:48:47.269395 | orchestrator | export NUMBER_OF_NODES=6 2025-07-12 19:48:47.269445 | orchestrator | 2025-07-12 19:48:47.269457 | orchestrator | export CEPH_VERSION=reef 2025-07-12 19:48:47.269469 | orchestrator | export CONFIGURATION_VERSION=main 2025-07-12 19:48:47.269480 | orchestrator 
| export MANAGER_VERSION=9.2.0 2025-07-12 19:48:47.269502 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-07-12 19:48:47.269511 | orchestrator | 2025-07-12 19:48:47.269526 | orchestrator | export ARA=false 2025-07-12 19:48:47.269535 | orchestrator | export DEPLOY_MODE=manager 2025-07-12 19:48:47.269548 | orchestrator | export TEMPEST=false 2025-07-12 19:48:47.269560 | orchestrator | export IS_ZUUL=true 2025-07-12 19:48:47.269569 | orchestrator | 2025-07-12 19:48:47.269583 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109 2025-07-12 19:48:47.269593 | orchestrator | export EXTERNAL_API=false 2025-07-12 19:48:47.269602 | orchestrator | 2025-07-12 19:48:47.269611 | orchestrator | export IMAGE_USER=ubuntu 2025-07-12 19:48:47.269624 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-07-12 19:48:47.269633 | orchestrator | 2025-07-12 19:48:47.269642 | orchestrator | export CEPH_STACK=ceph-ansible 2025-07-12 19:48:47.269661 | orchestrator | 2025-07-12 19:48:47.269670 | orchestrator | + echo 2025-07-12 19:48:47.269678 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-12 19:48:47.270351 | orchestrator | ++ export INTERACTIVE=false 2025-07-12 19:48:47.270381 | orchestrator | ++ INTERACTIVE=false 2025-07-12 19:48:47.270389 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-12 19:48:47.270399 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-12 19:48:47.270459 | orchestrator | + source /opt/manager-vars.sh 2025-07-12 19:48:47.270465 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-12 19:48:47.270470 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-12 19:48:47.270474 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-12 19:48:47.270479 | orchestrator | ++ CEPH_VERSION=reef 2025-07-12 19:48:47.270483 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-12 19:48:47.270488 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-12 19:48:47.270492 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-07-12 19:48:47.270496 | 
orchestrator | ++ MANAGER_VERSION=9.2.0
2025-07-12 19:48:47.270501 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-12 19:48:47.270512 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-12 19:48:47.270517 | orchestrator | ++ export ARA=false
2025-07-12 19:48:47.270521 | orchestrator | ++ ARA=false
2025-07-12 19:48:47.270526 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-12 19:48:47.270530 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-12 19:48:47.270536 | orchestrator | ++ export TEMPEST=false
2025-07-12 19:48:47.270540 | orchestrator | ++ TEMPEST=false
2025-07-12 19:48:47.270544 | orchestrator | ++ export IS_ZUUL=true
2025-07-12 19:48:47.270548 | orchestrator | ++ IS_ZUUL=true
2025-07-12 19:48:47.270552 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109
2025-07-12 19:48:47.270556 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109
2025-07-12 19:48:47.270561 | orchestrator | ++ export EXTERNAL_API=false
2025-07-12 19:48:47.270565 | orchestrator | ++ EXTERNAL_API=false
2025-07-12 19:48:47.270569 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-12 19:48:47.270573 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-12 19:48:47.270577 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-12 19:48:47.270581 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-12 19:48:47.270586 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-12 19:48:47.270590 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-12 19:48:47.270594 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-07-12 19:48:47.325314 | orchestrator | + docker version
2025-07-12 19:48:47.524377 | orchestrator | Client: Docker Engine - Community
2025-07-12 19:48:47.524473 | orchestrator | Version: 27.5.1
2025-07-12 19:48:47.524490 | orchestrator | API version: 1.47
2025-07-12 19:48:47.524501 | orchestrator | Go version: go1.22.11
2025-07-12 19:48:47.524513 | orchestrator | Git commit: 9f9e405
2025-07-12 19:48:47.524524 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-12 19:48:47.524537 | orchestrator | OS/Arch: linux/amd64
2025-07-12 19:48:47.524548 | orchestrator | Context: default
2025-07-12 19:48:47.524559 | orchestrator |
2025-07-12 19:48:47.524570 | orchestrator | Server: Docker Engine - Community
2025-07-12 19:48:47.524581 | orchestrator | Engine:
2025-07-12 19:48:47.524593 | orchestrator | Version: 27.5.1
2025-07-12 19:48:47.524605 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-07-12 19:48:47.524647 | orchestrator | Go version: go1.22.11
2025-07-12 19:48:47.524659 | orchestrator | Git commit: 4c9b3b0
2025-07-12 19:48:47.524670 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-07-12 19:48:47.524681 | orchestrator | OS/Arch: linux/amd64
2025-07-12 19:48:47.524691 | orchestrator | Experimental: false
2025-07-12 19:48:47.524702 | orchestrator | containerd:
2025-07-12 19:48:47.524714 | orchestrator | Version: 1.7.27
2025-07-12 19:48:47.524725 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-07-12 19:48:47.524736 | orchestrator | runc:
2025-07-12 19:48:47.524796 | orchestrator | Version: 1.2.5
2025-07-12 19:48:47.524809 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-07-12 19:48:47.524820 | orchestrator | docker-init:
2025-07-12 19:48:47.524831 | orchestrator | Version: 0.19.0
2025-07-12 19:48:47.524843 | orchestrator | GitCommit: de40ad0
2025-07-12 19:48:47.527781 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-07-12 19:48:47.535999 | orchestrator | + set -e
2025-07-12 19:48:47.536060 | orchestrator | + source /opt/manager-vars.sh
2025-07-12 19:48:47.536074 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-12 19:48:47.536085 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-12 19:48:47.536096 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-12 19:48:47.536107 | orchestrator | ++ CEPH_VERSION=reef
2025-07-12 19:48:47.536119 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-12 19:48:47.536131 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-12 19:48:47.536142 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-07-12 19:48:47.536153 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-07-12 19:48:47.536164 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-12 19:48:47.536175 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-12 19:48:47.536186 | orchestrator | ++ export ARA=false
2025-07-12 19:48:47.536210 | orchestrator | ++ ARA=false
2025-07-12 19:48:47.536222 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-12 19:48:47.536233 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-12 19:48:47.536243 | orchestrator | ++ export TEMPEST=false
2025-07-12 19:48:47.536254 | orchestrator | ++ TEMPEST=false
2025-07-12 19:48:47.536265 | orchestrator | ++ export IS_ZUUL=true
2025-07-12 19:48:47.536276 | orchestrator | ++ IS_ZUUL=true
2025-07-12 19:48:47.536287 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109
2025-07-12 19:48:47.536298 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109
2025-07-12 19:48:47.536309 | orchestrator | ++ export EXTERNAL_API=false
2025-07-12 19:48:47.536320 | orchestrator | ++ EXTERNAL_API=false
2025-07-12 19:48:47.536330 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-12 19:48:47.536341 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-12 19:48:47.536352 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-12 19:48:47.536363 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-12 19:48:47.536374 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-12 19:48:47.536385 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-12 19:48:47.536396 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-12 19:48:47.536407 | orchestrator | ++ export INTERACTIVE=false
2025-07-12 19:48:47.536417 | orchestrator | ++ INTERACTIVE=false
2025-07-12 19:48:47.536428 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-12 19:48:47.536454 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-12 19:48:47.536465 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]]
2025-07-12 19:48:47.536476 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 9.2.0
2025-07-12 19:48:47.543229 | orchestrator | + set -e
2025-07-12 19:48:47.543258 | orchestrator | + VERSION=9.2.0
2025-07-12 19:48:47.543273 | orchestrator | + sed -i 's/manager_version: .*/manager_version: 9.2.0/g' /opt/configuration/environments/manager/configuration.yml
2025-07-12 19:48:47.551465 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]]
2025-07-12 19:48:47.551497 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml
2025-07-12 19:48:47.555232 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml
2025-07-12 19:48:47.558797 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh
2025-07-12 19:48:47.567501 | orchestrator | /opt/configuration ~
2025-07-12 19:48:47.567522 | orchestrator | + set -e
2025-07-12 19:48:47.567535 | orchestrator | + pushd /opt/configuration
2025-07-12 19:48:47.567547 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-12 19:48:47.568914 | orchestrator | + source /opt/venv/bin/activate
2025-07-12 19:48:47.570216 | orchestrator | ++ deactivate nondestructive
2025-07-12 19:48:47.570240 | orchestrator | ++ '[' -n '' ']'
2025-07-12 19:48:47.570254 | orchestrator | ++ '[' -n '' ']'
2025-07-12 19:48:47.570290 | orchestrator | ++ hash -r
2025-07-12 19:48:47.570301 | orchestrator | ++ '[' -n '' ']'
2025-07-12 19:48:47.570312 | orchestrator | ++ unset VIRTUAL_ENV
2025-07-12 19:48:47.570323 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-07-12 19:48:47.570334 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-07-12 19:48:47.570345 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-07-12 19:48:47.570356 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-07-12 19:48:47.570604 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-07-12 19:48:47.570619 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-07-12 19:48:47.570631 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-12 19:48:47.570740 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-12 19:48:47.570792 | orchestrator | ++ export PATH
2025-07-12 19:48:47.570804 | orchestrator | ++ '[' -n '' ']'
2025-07-12 19:48:47.570815 | orchestrator | ++ '[' -z '' ']'
2025-07-12 19:48:47.570826 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-07-12 19:48:47.570837 | orchestrator | ++ PS1='(venv) '
2025-07-12 19:48:47.570848 | orchestrator | ++ export PS1
2025-07-12 19:48:47.570858 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-07-12 19:48:47.570869 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-07-12 19:48:47.570880 | orchestrator | ++ hash -r
2025-07-12 19:48:47.570891 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging
2025-07-12 19:48:48.708212 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3)
2025-07-12 19:48:48.709056 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.4)
2025-07-12 19:48:48.710340 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6)
2025-07-12 19:48:48.711558 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2)
2025-07-12 19:48:48.712596 | orchestrator | Requirement already satisfied: packaging in /opt/venv/lib/python3.12/site-packages (25.0)
2025-07-12 19:48:48.722863 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1)
2025-07-12 19:48:48.724326 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6)
2025-07-12 19:48:48.725492 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19)
2025-07-12 19:48:48.726844 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2)
2025-07-12 19:48:48.757070 | orchestrator | Requirement already satisfied: charset_normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2)
2025-07-12 19:48:48.758529 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10)
2025-07-12 19:48:48.760315 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.5.0)
2025-07-12 19:48:48.761600 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.7.9)
2025-07-12 19:48:48.765643 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2)
2025-07-12 19:48:48.967187 | orchestrator | ++ which gilt
2025-07-12 19:48:48.968845 | orchestrator | + GILT=/opt/venv/bin/gilt
2025-07-12 19:48:48.968871 | orchestrator | + /opt/venv/bin/gilt overlay
2025-07-12 19:48:49.213667 | orchestrator | osism.cfg-generics:
2025-07-12 19:48:49.361889 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/
2025-07-12 19:48:49.361997 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/
2025-07-12 19:48:49.362012 | orchestrator | - copied (v0.20250709.0) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/
2025-07-12 19:48:49.362086 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/
2025-07-12 19:48:50.024997 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/
2025-07-12 19:48:50.036729 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/
2025-07-12 19:48:50.485116 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/
2025-07-12 19:48:50.534144 | orchestrator | ~
2025-07-12 19:48:50.534229 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-12 19:48:50.534244 | orchestrator | + deactivate
2025-07-12 19:48:50.534257 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-07-12 19:48:50.534271 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-12 19:48:50.534282 | orchestrator | + export PATH
2025-07-12 19:48:50.534293 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-07-12 19:48:50.534305 | orchestrator | + '[' -n '' ']'
2025-07-12 19:48:50.534320 | orchestrator | + hash -r
2025-07-12 19:48:50.534331 | orchestrator | + '[' -n '' ']'
2025-07-12 19:48:50.534343 | orchestrator | + unset VIRTUAL_ENV
2025-07-12 19:48:50.534353 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-07-12 19:48:50.534365 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-07-12 19:48:50.534376 | orchestrator | + unset -f deactivate
2025-07-12 19:48:50.534387 | orchestrator | + popd
2025-07-12 19:48:50.536003 | orchestrator | + [[ 9.2.0 == \l\a\t\e\s\t ]]
2025-07-12 19:48:50.536022 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-07-12 19:48:50.536905 | orchestrator | ++ semver 9.2.0 7.0.0
2025-07-12 19:48:50.596736 | orchestrator | + [[ 1 -ge 0 ]]
2025-07-12 19:48:50.596884 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-07-12 19:48:50.596902 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-07-12 19:48:50.702987 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-07-12 19:48:50.703106 | orchestrator | + source /opt/venv/bin/activate
2025-07-12 19:48:50.703120 | orchestrator | ++ deactivate nondestructive
2025-07-12 19:48:50.703133 | orchestrator | ++ '[' -n '' ']'
2025-07-12 19:48:50.703144 | orchestrator | ++ '[' -n '' ']'
2025-07-12 19:48:50.703156 | orchestrator | ++ hash -r
2025-07-12 19:48:50.703167 | orchestrator | ++ '[' -n '' ']'
2025-07-12 19:48:50.703206 | orchestrator | ++ unset VIRTUAL_ENV
2025-07-12 19:48:50.703219 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-07-12 19:48:50.703241 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-07-12 19:48:50.703515 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-07-12 19:48:50.703533 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-07-12 19:48:50.703545 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-07-12 19:48:50.703556 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-07-12 19:48:50.703568 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-12 19:48:50.703581 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-07-12 19:48:50.703612 | orchestrator | ++ export PATH
2025-07-12 19:48:50.703625 | orchestrator | ++ '[' -n '' ']'
2025-07-12 19:48:50.703641 | orchestrator | ++ '[' -z '' ']'
2025-07-12 19:48:50.703652 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-07-12 19:48:50.703663 | orchestrator | ++ PS1='(venv) '
2025-07-12 19:48:50.703674 | orchestrator | ++ export PS1
2025-07-12 19:48:50.703685 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-07-12 19:48:50.703696 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-07-12 19:48:50.703707 | orchestrator | ++ hash -r
2025-07-12 19:48:50.703719 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-07-12 19:48:51.786254 | orchestrator |
2025-07-12 19:48:51.786367 | orchestrator | PLAY [Copy custom facts] *******************************************************
2025-07-12 19:48:51.786383 | orchestrator |
2025-07-12 19:48:51.786395 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-07-12 19:48:52.364111 | orchestrator | ok: [testbed-manager]
2025-07-12 19:48:52.364218 | orchestrator |
2025-07-12 19:48:52.364234 | orchestrator | TASK [Copy fact files] *********************************************************
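The trace above gates `enable_osism_kubernetes: true` on `semver 9.2.0 7.0.0` returning a non-negative result. A minimal sketch of that kind of version gate, using `sort -V` instead of the repository's `semver2.sh` helper (an assumption, not the actual script), could look like this:

```shell
# Hedged sketch: compare two versions like the `semver` gate in the log.
# Uses GNU `sort -V` rather than the real /opt/configuration/contrib/semver2.sh.
# Prints 1 if $1 > $2, -1 if $1 < $2, 0 if equal.
semver_cmp() {
  if [ "$1" = "$2" ]; then echo 0; return; fi
  # sort -V orders version strings; the first line is the lower version.
  lowest=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)
  if [ "$lowest" = "$2" ]; then echo 1; else echo -1; fi
}

# Mirror of the gate in the log: enable the flag for manager >= 7.0.0.
result=$(semver_cmp 9.2.0 7.0.0)
if [ "$result" -ge 0 ]; then
  echo 'enable_osism_kubernetes: true'
fi
```

`sort -V` handles multi-digit components correctly (e.g. 9.10.0 sorts after 9.2.0), which naive string comparison would not.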
2025-07-12 19:48:53.458885 | orchestrator | changed: [testbed-manager]
2025-07-12 19:48:53.458997 | orchestrator |
2025-07-12 19:48:53.459013 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-07-12 19:48:53.459026 | orchestrator |
2025-07-12 19:48:53.459037 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 19:48:55.767833 | orchestrator | ok: [testbed-manager]
2025-07-12 19:48:55.767905 | orchestrator |
2025-07-12 19:48:55.767914 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-07-12 19:48:55.812011 | orchestrator | ok: [testbed-manager]
2025-07-12 19:48:55.812111 | orchestrator |
2025-07-12 19:48:55.812129 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-07-12 19:48:56.242467 | orchestrator | changed: [testbed-manager]
2025-07-12 19:48:56.242551 | orchestrator |
2025-07-12 19:48:56.242567 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-07-12 19:48:56.279400 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:48:56.279428 | orchestrator |
2025-07-12 19:48:56.279439 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-07-12 19:48:56.602073 | orchestrator | changed: [testbed-manager]
2025-07-12 19:48:56.602155 | orchestrator |
2025-07-12 19:48:56.602171 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-07-12 19:48:56.655422 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:48:56.655497 | orchestrator |
2025-07-12 19:48:56.655511 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-07-12 19:48:56.955894 | orchestrator | ok: [testbed-manager]
2025-07-12 19:48:56.955971 | orchestrator |
2025-07-12 19:48:56.955986 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-07-12 19:48:57.065607 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:48:57.065684 | orchestrator |
2025-07-12 19:48:57.065698 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-07-12 19:48:57.065710 | orchestrator |
2025-07-12 19:48:57.065721 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 19:48:58.733811 | orchestrator | ok: [testbed-manager]
2025-07-12 19:48:58.733899 | orchestrator |
2025-07-12 19:48:58.733914 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-07-12 19:48:58.816124 | orchestrator | included: osism.services.traefik for testbed-manager
2025-07-12 19:48:58.816203 | orchestrator |
2025-07-12 19:48:58.816217 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-07-12 19:48:58.879057 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-07-12 19:48:58.879127 | orchestrator |
2025-07-12 19:48:58.879149 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-07-12 19:48:59.900106 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-07-12 19:48:59.900193 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-07-12 19:48:59.900211 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-07-12 19:48:59.900223 | orchestrator |
2025-07-12 19:48:59.900235 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-07-12 19:49:01.621245 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-07-12 19:49:01.621354 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
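The traefik role's "Create required directories" task loops over `/opt/traefik`, `/opt/traefik/certificates`, and `/opt/traefik/configuration`, reporting `changed` only on first creation. A minimal shell sketch of the same idempotent pattern (the role itself uses Ansible's `file` module; the demo paths below are illustrative, not the role's real targets):

```shell
# Hedged sketch: idempotent directory creation like the traefik role's
# "Create required directories" loop. mkdir -p succeeds whether or not
# the directory already exists, so reruns are safe.
base=/tmp/traefik-demo   # illustrative stand-in for /opt/traefik
for d in "$base" "$base/certificates" "$base/configuration"; do
  mkdir -p "$d"
done
ls "$base"
```

Running the loop a second time changes nothing, which is exactly the `ok` (instead of `changed`) result Ansible would report on a rerun.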
2025-07-12 19:49:01.621379 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-07-12 19:49:01.621399 | orchestrator |
2025-07-12 19:49:01.621419 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-07-12 19:49:02.227294 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-12 19:49:02.227379 | orchestrator | changed: [testbed-manager]
2025-07-12 19:49:02.227395 | orchestrator |
2025-07-12 19:49:02.227408 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-07-12 19:49:02.814644 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-12 19:49:02.814739 | orchestrator | changed: [testbed-manager]
2025-07-12 19:49:02.814802 | orchestrator |
2025-07-12 19:49:02.814814 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-07-12 19:49:02.859857 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:49:02.859930 | orchestrator |
2025-07-12 19:49:02.859943 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-07-12 19:49:03.194696 | orchestrator | ok: [testbed-manager]
2025-07-12 19:49:03.194813 | orchestrator |
2025-07-12 19:49:03.194831 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-07-12 19:49:03.260411 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-07-12 19:49:03.260496 | orchestrator |
2025-07-12 19:49:03.260512 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-07-12 19:49:04.302731 | orchestrator | changed: [testbed-manager]
2025-07-12 19:49:04.302898 | orchestrator |
2025-07-12 19:49:04.302915 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-07-12 19:49:04.981443 | orchestrator | changed: [testbed-manager]
2025-07-12 19:49:04.981529 | orchestrator |
2025-07-12 19:49:04.981545 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-07-12 19:49:15.913402 | orchestrator | changed: [testbed-manager]
2025-07-12 19:49:15.913578 | orchestrator |
2025-07-12 19:49:15.913621 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-07-12 19:49:15.964301 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:49:15.964419 | orchestrator |
2025-07-12 19:49:15.964431 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-07-12 19:49:15.964441 | orchestrator |
2025-07-12 19:49:15.964449 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 19:49:17.727972 | orchestrator | ok: [testbed-manager]
2025-07-12 19:49:17.728082 | orchestrator |
2025-07-12 19:49:17.728101 | orchestrator | TASK [Apply manager role] ******************************************************
2025-07-12 19:49:17.849545 | orchestrator | included: osism.services.manager for testbed-manager
2025-07-12 19:49:17.849643 | orchestrator |
2025-07-12 19:49:17.849658 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-07-12 19:49:17.907777 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-07-12 19:49:17.907862 | orchestrator |
2025-07-12 19:49:17.907877 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-07-12 19:49:20.330848 | orchestrator | ok: [testbed-manager]
2025-07-12 19:49:20.330972 | orchestrator |
2025-07-12 19:49:20.330990 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-07-12 19:49:20.377795 | orchestrator | ok: [testbed-manager]
2025-07-12 19:49:20.377888 | orchestrator |
2025-07-12 19:49:20.377902 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-07-12 19:49:20.500983 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-07-12 19:49:20.501073 | orchestrator |
2025-07-12 19:49:20.501088 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-07-12 19:49:23.288943 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-07-12 19:49:23.289046 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-07-12 19:49:23.289063 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-07-12 19:49:23.289076 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-07-12 19:49:23.289088 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-07-12 19:49:23.289099 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-07-12 19:49:23.289109 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-07-12 19:49:23.289121 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-07-12 19:49:23.289132 | orchestrator |
2025-07-12 19:49:23.289149 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-07-12 19:49:23.903351 | orchestrator | changed: [testbed-manager]
2025-07-12 19:49:23.903483 | orchestrator |
2025-07-12 19:49:23.903510 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-07-12 19:49:24.555709 | orchestrator | changed: [testbed-manager]
2025-07-12 19:49:24.555867 | orchestrator |
2025-07-12 19:49:24.555884 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-07-12 19:49:24.636369 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-07-12 19:49:24.636466 | orchestrator |
2025-07-12 19:49:24.636481 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-07-12 19:49:25.858335 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-07-12 19:49:25.858404 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-07-12 19:49:25.858425 | orchestrator |
2025-07-12 19:49:25.858446 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-07-12 19:49:26.488723 | orchestrator | changed: [testbed-manager]
2025-07-12 19:49:26.488898 | orchestrator |
2025-07-12 19:49:26.488926 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-07-12 19:49:26.539438 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:49:26.539543 | orchestrator |
2025-07-12 19:49:26.539560 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-07-12 19:49:26.611632 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-07-12 19:49:26.611736 | orchestrator |
2025-07-12 19:49:26.611838 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-07-12 19:49:27.982904 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-12 19:49:27.982995 | orchestrator | changed: [testbed-manager] => (item=None)
2025-07-12 19:49:27.983010 | orchestrator | changed: [testbed-manager]
2025-07-12 19:49:27.983022 | orchestrator |
2025-07-12 19:49:27.983034 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-07-12 19:49:28.560365 | orchestrator | changed: [testbed-manager]
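The "Copy private ssh keys" task above masks its loop items as `(item=None)` because Ansible hides secret content (`no_log`-style loop labels). A minimal sketch of the underlying operation, copying a key file with owner-only permissions in one step (paths and content below are illustrative placeholders, not the role's real files):

```shell
# Hedged sketch: install a secret file with mode 0600 in a single
# atomic-ish step, as the "Copy private ssh keys" task effectively does.
# install(1) copies the file and applies the mode at creation time,
# avoiding a window where the key is readable with default permissions.
src=/tmp/demo_id_rsa.src   # illustrative source, not a real key
dst=/tmp/demo_id_rsa
printf 'dummy-key-material\n' > "$src"
install -m 0600 "$src" "$dst"
stat -c '%a' "$dst"
```

An ssh client refuses keys with looser permissions, so setting the mode at copy time rather than with a follow-up `chmod` is the safer pattern.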
2025-07-12 19:49:28.560446 | orchestrator |
2025-07-12 19:49:28.560461 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-07-12 19:49:28.613263 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:49:28.613336 | orchestrator |
2025-07-12 19:49:28.613351 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-07-12 19:49:28.698143 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-07-12 19:49:28.698224 | orchestrator |
2025-07-12 19:49:28.698238 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-07-12 19:49:29.183377 | orchestrator | changed: [testbed-manager]
2025-07-12 19:49:29.183463 | orchestrator |
2025-07-12 19:49:29.183482 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-07-12 19:49:29.552015 | orchestrator | changed: [testbed-manager]
2025-07-12 19:49:29.552115 | orchestrator |
2025-07-12 19:49:29.552134 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-07-12 19:49:30.664072 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-07-12 19:49:30.664177 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-07-12 19:49:30.664206 | orchestrator |
2025-07-12 19:49:30.664229 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-07-12 19:49:31.248156 | orchestrator | changed: [testbed-manager]
2025-07-12 19:49:31.248307 | orchestrator |
2025-07-12 19:49:31.248331 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-07-12 19:49:31.620856 | orchestrator | ok: [testbed-manager]
2025-07-12 19:49:31.620950 | orchestrator |
2025-07-12 19:49:31.620967 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-07-12 19:49:31.978263 | orchestrator | changed: [testbed-manager]
2025-07-12 19:49:31.978365 | orchestrator |
2025-07-12 19:49:31.978382 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-07-12 19:49:32.028548 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:49:32.028679 | orchestrator |
2025-07-12 19:49:32.028697 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-07-12 19:49:32.089874 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-07-12 19:49:32.089972 | orchestrator |
2025-07-12 19:49:32.089986 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-07-12 19:49:32.130177 | orchestrator | ok: [testbed-manager]
2025-07-12 19:49:32.130282 | orchestrator |
2025-07-12 19:49:32.130299 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-07-12 19:49:34.155678 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-07-12 19:49:34.155885 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-07-12 19:49:34.155904 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-07-12 19:49:34.155916 | orchestrator |
2025-07-12 19:49:34.155929 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-07-12 19:49:34.901218 | orchestrator | changed: [testbed-manager]
2025-07-12 19:49:34.901310 | orchestrator |
2025-07-12 19:49:34.901322 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-07-12 19:49:35.627185 | orchestrator | changed: [testbed-manager]
2025-07-12 19:49:35.627317 | orchestrator |
2025-07-12 19:49:35.627347 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-07-12 19:49:36.322180 | orchestrator | changed: [testbed-manager]
2025-07-12 19:49:36.322303 | orchestrator |
2025-07-12 19:49:36.322321 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-07-12 19:49:36.397588 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-07-12 19:49:36.397687 | orchestrator |
2025-07-12 19:49:36.397701 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-07-12 19:49:36.438840 | orchestrator | ok: [testbed-manager]
2025-07-12 19:49:36.438934 | orchestrator |
2025-07-12 19:49:36.438949 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-07-12 19:49:37.129927 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-07-12 19:49:37.130101 | orchestrator |
2025-07-12 19:49:37.130120 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-07-12 19:49:37.214647 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-07-12 19:49:37.214820 | orchestrator |
2025-07-12 19:49:37.214848 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-07-12 19:49:37.959544 | orchestrator | changed: [testbed-manager]
2025-07-12 19:49:37.959651 | orchestrator |
2025-07-12 19:49:37.959669 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-07-12 19:49:38.581981 | orchestrator | ok: [testbed-manager]
2025-07-12 19:49:38.582151 | orchestrator |
2025-07-12 19:49:38.582166 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-07-12 19:49:38.638495 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:49:38.638575 | orchestrator |
2025-07-12 19:49:38.638588 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-07-12 19:49:38.699687 | orchestrator | ok: [testbed-manager]
2025-07-12 19:49:38.699812 | orchestrator |
2025-07-12 19:49:38.699828 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-07-12 19:49:39.521186 | orchestrator | changed: [testbed-manager]
2025-07-12 19:49:39.521290 | orchestrator |
2025-07-12 19:49:39.521306 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-07-12 19:50:46.048273 | orchestrator | changed: [testbed-manager]
2025-07-12 19:50:46.048375 | orchestrator |
2025-07-12 19:50:46.048394 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-07-12 19:50:46.963120 | orchestrator | ok: [testbed-manager]
2025-07-12 19:50:46.963208 | orchestrator |
2025-07-12 19:50:46.963225 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-07-12 19:50:47.016460 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:50:47.016541 | orchestrator |
2025-07-12 19:50:47.016556 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-07-12 19:50:49.781677 | orchestrator | changed: [testbed-manager]
2025-07-12 19:50:49.781763 | orchestrator |
2025-07-12 19:50:49.781779 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-07-12 19:50:49.851377 | orchestrator | ok: [testbed-manager]
2025-07-12 19:50:49.851444 | orchestrator |
2025-07-12 19:50:49.851458 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-07-12 19:50:49.851471 | orchestrator |
2025-07-12 19:50:49.851483 | orchestrator | RUNNING
HANDLER [osism.services.manager : Restart manager service] ************* 2025-07-12 19:50:49.905875 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:50:49.905922 | orchestrator | 2025-07-12 19:50:49.905957 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-07-12 19:51:49.956141 | orchestrator | Pausing for 60 seconds 2025-07-12 19:51:49.956277 | orchestrator | changed: [testbed-manager] 2025-07-12 19:51:49.956299 | orchestrator | 2025-07-12 19:51:49.956318 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-07-12 19:51:54.717332 | orchestrator | changed: [testbed-manager] 2025-07-12 19:51:54.717440 | orchestrator | 2025-07-12 19:51:54.717458 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-07-12 19:52:36.512925 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-07-12 19:52:36.513046 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
2025-07-12 19:52:36.513061 | orchestrator | changed: [testbed-manager] 2025-07-12 19:52:36.513075 | orchestrator | 2025-07-12 19:52:36.513087 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-07-12 19:52:46.846803 | orchestrator | changed: [testbed-manager] 2025-07-12 19:52:46.846923 | orchestrator | 2025-07-12 19:52:46.846959 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-07-12 19:52:46.961550 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-07-12 19:52:46.961649 | orchestrator | 2025-07-12 19:52:46.961664 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-07-12 19:52:46.961677 | orchestrator | 2025-07-12 19:52:46.961690 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-07-12 19:52:47.025420 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:52:47.025518 | orchestrator | 2025-07-12 19:52:47.025533 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:52:47.025546 | orchestrator | testbed-manager : ok=64 changed=35 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-07-12 19:52:47.025558 | orchestrator | 2025-07-12 19:52:47.146331 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-07-12 19:52:47.146435 | orchestrator | + deactivate 2025-07-12 19:52:47.146452 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-07-12 19:52:47.146466 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-07-12 19:52:47.146477 | orchestrator | + export PATH 2025-07-12 19:52:47.146493 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-07-12 
19:52:47.146506 | orchestrator | + '[' -n '' ']' 2025-07-12 19:52:47.146519 | orchestrator | + hash -r 2025-07-12 19:52:47.146530 | orchestrator | + '[' -n '' ']' 2025-07-12 19:52:47.146541 | orchestrator | + unset VIRTUAL_ENV 2025-07-12 19:52:47.146552 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-07-12 19:52:47.146564 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-07-12 19:52:47.146575 | orchestrator | + unset -f deactivate 2025-07-12 19:52:47.146586 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-07-12 19:52:47.152158 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-07-12 19:52:47.152198 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-07-12 19:52:47.152211 | orchestrator | + local max_attempts=60 2025-07-12 19:52:47.152222 | orchestrator | + local name=ceph-ansible 2025-07-12 19:52:47.152233 | orchestrator | + local attempt_num=1 2025-07-12 19:52:47.153532 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 19:52:47.189693 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-12 19:52:47.189793 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-07-12 19:52:47.189802 | orchestrator | + local max_attempts=60 2025-07-12 19:52:47.189810 | orchestrator | + local name=kolla-ansible 2025-07-12 19:52:47.189816 | orchestrator | + local attempt_num=1 2025-07-12 19:52:47.189823 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-07-12 19:52:47.216061 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-12 19:52:47.216140 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-07-12 19:52:47.216147 | orchestrator | + local max_attempts=60 2025-07-12 19:52:47.216152 | orchestrator | + local name=osism-ansible 2025-07-12 19:52:47.216157 | orchestrator | + local attempt_num=1 2025-07-12 19:52:47.216574 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' osism-ansible 2025-07-12 19:52:47.240686 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-12 19:52:47.240754 | orchestrator | + [[ true == \t\r\u\e ]] 2025-07-12 19:52:47.240764 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-07-12 19:52:47.980337 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-07-12 19:52:48.222473 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-07-12 19:52:48.222578 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-07-12 19:52:48.222596 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-07-12 19:52:48.222609 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-07-12 19:52:48.222623 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-07-12 19:52:48.222634 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2025-07-12 19:52:48.222645 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2025-07-12 19:52:48.222656 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 53 seconds (healthy) 2025-07-12 19:52:48.222667 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" listener About a minute 
ago Up About a minute (healthy) 2025-07-12 19:52:48.222677 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-07-12 19:52:48.222688 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy) 2025-07-12 19:52:48.222699 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-07-12 19:52:48.222710 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-07-12 19:52:48.222766 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-07-12 19:52:48.222777 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2025-07-12 19:52:48.232768 | orchestrator | ++ semver 9.2.0 7.0.0 2025-07-12 19:52:48.297225 | orchestrator | + [[ 1 -ge 0 ]] 2025-07-12 19:52:48.297310 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-07-12 19:52:48.302663 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-07-12 19:53:00.505599 | orchestrator | 2025-07-12 19:53:00 | INFO  | Task 241cce8c-01e3-49de-a7e7-f1bffea923fa (resolvconf) was prepared for execution. 2025-07-12 19:53:00.505742 | orchestrator | 2025-07-12 19:53:00 | INFO  | It takes a moment until task 241cce8c-01e3-49de-a7e7-f1bffea923fa (resolvconf) has been started and output is visible here. 
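The `wait_for_container_healthy` calls traced above (for `ceph-ansible`, `kolla-ansible`, and `osism-ansible`) poll `docker inspect -f '{{.State.Health.Status}}'` until the container reports `healthy`. A minimal reconstruction of that helper, inferred from the trace rather than copied from the testbed scripts; `CHECK_CMD` and `WAIT_INTERVAL` are hypothetical hooks added here so the loop can be exercised without a running Docker daemon:

```shell
# Reconstruction of the health-wait helper seen in the trace (a sketch,
# not the verbatim testbed script). The real check is:
#   /usr/bin/docker inspect -f '{{.State.Health.Status}}' "$name"
wait_for_container_healthy() {
    max_attempts="$1"
    name="$2"
    attempt_num=1
    while true; do
        # CHECK_CMD is a hypothetical override for testing; defaults to docker
        status="$("${CHECK_CMD:-docker}" inspect -f '{{.State.Health.Status}}' "$name" 2>/dev/null)"
        if [ "$status" = "healthy" ]; then
            return 0
        fi
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            return 1        # gave up before the container became healthy
        fi
        attempt_num=$((attempt_num + 1))
        sleep "${WAIT_INTERVAL:-5}"
    done
}
```

In the trace all three containers are already `healthy` on the first inspect, so each call returns immediately.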
2025-07-12 19:53:14.995195 | orchestrator | 2025-07-12 19:53:14.995297 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-07-12 19:53:14.995336 | orchestrator | 2025-07-12 19:53:14.995350 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-07-12 19:53:14.995362 | orchestrator | Saturday 12 July 2025 19:53:04 +0000 (0:00:00.160) 0:00:00.160 ********* 2025-07-12 19:53:14.995374 | orchestrator | ok: [testbed-manager] 2025-07-12 19:53:14.995386 | orchestrator | 2025-07-12 19:53:14.995398 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-07-12 19:53:14.995410 | orchestrator | Saturday 12 July 2025 19:53:08 +0000 (0:00:03.416) 0:00:03.576 ********* 2025-07-12 19:53:14.995422 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:53:14.995434 | orchestrator | 2025-07-12 19:53:14.995446 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-07-12 19:53:14.995458 | orchestrator | Saturday 12 July 2025 19:53:08 +0000 (0:00:00.073) 0:00:03.649 ********* 2025-07-12 19:53:14.995470 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-07-12 19:53:14.995482 | orchestrator | 2025-07-12 19:53:14.995494 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-07-12 19:53:14.995506 | orchestrator | Saturday 12 July 2025 19:53:08 +0000 (0:00:00.089) 0:00:03.738 ********* 2025-07-12 19:53:14.995517 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-07-12 19:53:14.995529 | orchestrator | 2025-07-12 19:53:14.995541 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2025-07-12 19:53:14.995553 | orchestrator | Saturday 12 July 2025 19:53:08 +0000 (0:00:00.088) 0:00:03.827 ********* 2025-07-12 19:53:14.995564 | orchestrator | ok: [testbed-manager] 2025-07-12 19:53:14.995576 | orchestrator | 2025-07-12 19:53:14.995588 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-07-12 19:53:14.995599 | orchestrator | Saturday 12 July 2025 19:53:09 +0000 (0:00:01.189) 0:00:05.017 ********* 2025-07-12 19:53:14.995611 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:53:14.995623 | orchestrator | 2025-07-12 19:53:14.995634 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-07-12 19:53:14.995646 | orchestrator | Saturday 12 July 2025 19:53:09 +0000 (0:00:00.070) 0:00:05.087 ********* 2025-07-12 19:53:14.995658 | orchestrator | ok: [testbed-manager] 2025-07-12 19:53:14.995669 | orchestrator | 2025-07-12 19:53:14.995681 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-07-12 19:53:14.995693 | orchestrator | Saturday 12 July 2025 19:53:10 +0000 (0:00:00.502) 0:00:05.589 ********* 2025-07-12 19:53:14.995705 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:53:14.995770 | orchestrator | 2025-07-12 19:53:14.995783 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-07-12 19:53:14.995796 | orchestrator | Saturday 12 July 2025 19:53:10 +0000 (0:00:00.093) 0:00:05.683 ********* 2025-07-12 19:53:14.995809 | orchestrator | changed: [testbed-manager] 2025-07-12 19:53:14.995822 | orchestrator | 2025-07-12 19:53:14.995833 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-07-12 19:53:14.995844 | orchestrator | Saturday 12 July 2025 19:53:10 +0000 (0:00:00.552) 0:00:06.236 ********* 2025-07-12 19:53:14.995855 | orchestrator | changed: 
[testbed-manager] 2025-07-12 19:53:14.995865 | orchestrator | 2025-07-12 19:53:14.995876 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-07-12 19:53:14.995887 | orchestrator | Saturday 12 July 2025 19:53:11 +0000 (0:00:01.002) 0:00:07.238 ********* 2025-07-12 19:53:14.995918 | orchestrator | ok: [testbed-manager] 2025-07-12 19:53:14.995929 | orchestrator | 2025-07-12 19:53:14.995940 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-07-12 19:53:14.995951 | orchestrator | Saturday 12 July 2025 19:53:13 +0000 (0:00:01.914) 0:00:09.152 ********* 2025-07-12 19:53:14.995962 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-07-12 19:53:14.995972 | orchestrator | 2025-07-12 19:53:14.995983 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-07-12 19:53:14.995994 | orchestrator | Saturday 12 July 2025 19:53:13 +0000 (0:00:00.096) 0:00:09.249 ********* 2025-07-12 19:53:14.996005 | orchestrator | changed: [testbed-manager] 2025-07-12 19:53:14.996015 | orchestrator | 2025-07-12 19:53:14.996036 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:53:14.996049 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 19:53:14.996060 | orchestrator | 2025-07-12 19:53:14.996071 | orchestrator | 2025-07-12 19:53:14.996081 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 19:53:14.996092 | orchestrator | Saturday 12 July 2025 19:53:14 +0000 (0:00:01.059) 0:00:10.309 ********* 2025-07-12 19:53:14.996103 | orchestrator | =============================================================================== 2025-07-12 19:53:14.996114 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.42s 2025-07-12 19:53:14.996124 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 1.91s 2025-07-12 19:53:14.996135 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.19s 2025-07-12 19:53:14.996146 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.06s 2025-07-12 19:53:14.996156 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.00s 2025-07-12 19:53:14.996167 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s 2025-07-12 19:53:14.996194 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s 2025-07-12 19:53:14.996206 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.10s 2025-07-12 19:53:14.996217 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s 2025-07-12 19:53:14.996228 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s 2025-07-12 19:53:14.996239 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2025-07-12 19:53:14.996249 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-07-12 19:53:14.996260 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.07s 2025-07-12 19:53:15.223466 | orchestrator | + osism apply sshconfig 2025-07-12 19:53:27.022521 | orchestrator | 2025-07-12 19:53:27 | INFO  | Task 4c7af232-98e2-4752-b39a-e1ca0ea44ba1 (sshconfig) was prepared for execution. 
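A few steps back in the trace, `++ semver 9.2.0 7.0.0` printed `1` and the script tested `[[ 1 -ge 0 ]]`, i.e. the helper emits a positive value when the first version is at least the second. One way such a comparison can be sketched with `sort -V` (GNU version sort); `semver_cmp` is an illustrative stand-in, not the actual helper installed on the manager:

```shell
# Hypothetical sketch of a semver-style compare: prints 1/0/-1 when the
# first version is greater/equal/less. Assumes plain MAJOR.MINOR.PATCH
# strings and a sort(1) that supports -V (GNU coreutils).
semver_cmp() {
    if [ "$1" = "$2" ]; then
        echo 0
        return
    fi
    # sort -V orders version strings numerically component by component
    lowest="$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)"
    if [ "$lowest" = "$2" ]; then
        echo 1
    else
        echo -1
    fi
}
```

With this shape, `semver_cmp 9.2.0 7.0.0` yields `1`, matching the `-ge 0` branch taken in the trace before the `ansible.cfg` callback was swapped via `sed`.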
2025-07-12 19:53:27.022583 | orchestrator | 2025-07-12 19:53:27 | INFO  | It takes a moment until task 4c7af232-98e2-4752-b39a-e1ca0ea44ba1 (sshconfig) has been started and output is visible here. 2025-07-12 19:53:38.203614 | orchestrator | 2025-07-12 19:53:38.203788 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-07-12 19:53:38.203807 | orchestrator | 2025-07-12 19:53:38.203819 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-07-12 19:53:38.203831 | orchestrator | Saturday 12 July 2025 19:53:30 +0000 (0:00:00.149) 0:00:00.149 ********* 2025-07-12 19:53:38.203842 | orchestrator | ok: [testbed-manager] 2025-07-12 19:53:38.203854 | orchestrator | 2025-07-12 19:53:38.203866 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-07-12 19:53:38.203926 | orchestrator | Saturday 12 July 2025 19:53:31 +0000 (0:00:00.511) 0:00:00.660 ********* 2025-07-12 19:53:38.203955 | orchestrator | changed: [testbed-manager] 2025-07-12 19:53:38.203974 | orchestrator | 2025-07-12 19:53:38.203991 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-07-12 19:53:38.204009 | orchestrator | Saturday 12 July 2025 19:53:31 +0000 (0:00:00.484) 0:00:01.145 ********* 2025-07-12 19:53:38.204028 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-07-12 19:53:38.204046 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-07-12 19:53:38.204065 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-07-12 19:53:38.204083 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-07-12 19:53:38.204098 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-07-12 19:53:38.204109 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-07-12 19:53:38.204120 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-5) 2025-07-12 19:53:38.204130 | orchestrator | 2025-07-12 19:53:38.204141 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-07-12 19:53:38.204152 | orchestrator | Saturday 12 July 2025 19:53:37 +0000 (0:00:05.637) 0:00:06.783 ********* 2025-07-12 19:53:38.204165 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:53:38.204177 | orchestrator | 2025-07-12 19:53:38.204189 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-07-12 19:53:38.204202 | orchestrator | Saturday 12 July 2025 19:53:37 +0000 (0:00:00.061) 0:00:06.844 ********* 2025-07-12 19:53:38.204214 | orchestrator | changed: [testbed-manager] 2025-07-12 19:53:38.204226 | orchestrator | 2025-07-12 19:53:38.204237 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:53:38.204270 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 19:53:38.204284 | orchestrator | 2025-07-12 19:53:38.204296 | orchestrator | 2025-07-12 19:53:38.204308 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 19:53:38.204320 | orchestrator | Saturday 12 July 2025 19:53:37 +0000 (0:00:00.606) 0:00:07.451 ********* 2025-07-12 19:53:38.204332 | orchestrator | =============================================================================== 2025-07-12 19:53:38.204345 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.64s 2025-07-12 19:53:38.204357 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.61s 2025-07-12 19:53:38.204369 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.51s 2025-07-12 19:53:38.204381 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.48s 2025-07-12 19:53:38.204394 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2025-07-12 19:53:38.508675 | orchestrator | + osism apply known-hosts 2025-07-12 19:53:50.556126 | orchestrator | 2025-07-12 19:53:50 | INFO  | Task 3df30105-ff7d-4d53-a897-856f0531f7bc (known-hosts) was prepared for execution. 2025-07-12 19:53:50.556248 | orchestrator | 2025-07-12 19:53:50 | INFO  | It takes a moment until task 3df30105-ff7d-4d53-a897-856f0531f7bc (known-hosts) has been started and output is visible here. 2025-07-12 19:54:06.708763 | orchestrator | 2025-07-12 19:54:06.708866 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-07-12 19:54:06.708878 | orchestrator | 2025-07-12 19:54:06.708888 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-07-12 19:54:06.708898 | orchestrator | Saturday 12 July 2025 19:53:54 +0000 (0:00:00.159) 0:00:00.159 ********* 2025-07-12 19:54:06.708906 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-07-12 19:54:06.708915 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-07-12 19:54:06.708923 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-07-12 19:54:06.708931 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-07-12 19:54:06.708957 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-07-12 19:54:06.708965 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-07-12 19:54:06.708973 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-07-12 19:54:06.708981 | orchestrator | 2025-07-12 19:54:06.708989 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-07-12 19:54:06.708998 | orchestrator | Saturday 12 July 2025 19:54:00 +0000 (0:00:06.031) 0:00:06.191 ********* 2025-07-12 
19:54:06.709007 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-07-12 19:54:06.709017 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-07-12 19:54:06.709024 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-07-12 19:54:06.709032 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-07-12 19:54:06.709040 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-07-12 19:54:06.709048 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-07-12 19:54:06.709056 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-07-12 19:54:06.709063 | orchestrator | 2025-07-12 19:54:06.709071 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 19:54:06.709079 | orchestrator | Saturday 12 July 2025 19:54:00 +0000 (0:00:00.189) 0:00:06.380 ********* 2025-07-12 19:54:06.709087 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIEkgL6TgyjngpPnUpnA+UG0lfzY4B+MRTjqWwaz3E6kR) 2025-07-12 19:54:06.709157 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4AXctZ8fbBb7tsGAdVonDbA9mUL7ikyUxwSTyVtyPBDr9XGfTyXWy69WfW+Ldq9aM3gFyCBG+7nNdI87aaiviGMBRm3iWvIiz9aXFhnAiL1EN7+XIZJxHXiCRc+rC0kvq3tFSHZy8OmbZQ4R83i2aELoRnRZ5GLPzW+HI+SYgkJjif/GORB4srZdE5Le9l9zHy/Tdz28a9J0EKcz96cBLK20V9S/M7fNSV8Ua165Kxxhe/hQGM20YXRbVdfUETGRadqLO33En1SRWQCeriQ8O3cUjxh7q1jHWCHoGHY5BhpY9o8Qis9+lLGIAs0vAyLinUHg7sAH/NLWLl+05P9QLOHi1w1p2ZdvODAToJsu5SvaGLyO2UWHoJr1Mb+NsC/mqPdKuzsh/zE8x2t3SSRezQ+hcDHQW1D+LxvNx7jnVzlNmvzcRHsvJIYvhe6DmrlL4dPdXATwzF1nsqCxoMDCBLM8+LlM4vnd2b3+iwSjlp5100wXdHAD63dXi8+vS6K0=) 2025-07-12 19:54:06.709169 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE0udSVSYkj7sqGEpP2IY0x44ZXBeFgFowxfjLpAaegxBRumMPp0Si3jLlZiJlVeuVgtiBG5QHwEOKCxO6wWKDI=) 2025-07-12 19:54:06.709179 | orchestrator | 2025-07-12 19:54:06.709187 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 19:54:06.709195 | orchestrator | Saturday 12 July 2025 19:54:01 +0000 (0:00:01.154) 0:00:07.535 ********* 2025-07-12 19:54:06.709203 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIZOfczbhPbUzFWtZazvNNgPxvlhEEdADmM7KPN6oPnw) 2025-07-12 19:54:06.709239 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCoeYemwKH8Ga5xQr+9MDAyk85VRZUdtxjiNmaAsshNmcjHKlh/+2nBDE8EFo53Rglwed+5/D6PN8ufqn89nNBWTk/sT584BTFGNW+C2zbfoTEfEayKyAvzu/yW3oVrW5DqDTO6w9IIlmagVGlC68grWm7VIsGukei6i5gEUxoESl1naJMUeeWphzmdw8cKZexRZxm8i/Wi4u7qblb+BhuAnDQ1kNkrbBYNgZqSlHZdXismcSOCYce8WRF569OYkgimhpUUDa44sN8x9mqNdML1qHOFjZr0uuefIH2QNFBK1H+e8os+khBJaHwJbsKLo26Fy6VQMUNQ7KoaZFf0Hhz4vIvbJTmMc6sxOJ8kPTVtXdzfvGkiCJVuFFStskz+8sLJOzdIxS8gJeUtdk2Z69HQGq3Is7r/l1ActNB9yFkvNNvM7z3Cpud338PM/Ozl19ReRxA3c24Fscmr/e/K7VZoD4/Auy0V1iX/V0TQJvmViZkL6pAerrapJsTEcWsGMZ0=) 2025-07-12 19:54:06.709256 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN8xnFz4x5Epqf/ueIynXg1H7UkXM+KnrTigPEq6puNt1ZpDDUkt7Jq0xJn6zqFSkAACIohZvPG97xT0lwJ3UUo=) 2025-07-12 19:54:06.709265 | orchestrator | 2025-07-12 19:54:06.709275 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 19:54:06.709283 | orchestrator | Saturday 12 July 2025 19:54:02 +0000 (0:00:00.959) 0:00:08.495 ********* 2025-07-12 19:54:06.709293 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCq1Mmh+iQolMsu3cYhFuucTaJguw5aw4xQkMZPlpitkLD84XfbxwyR5n+cAWm1+Ud+t0I0BEIcB1oCa5oUFaIgxs10Rwnjy1A3AkNJEjbi9EvLzIkTBWCl+QHT0apuT/h39Bmv5Dw6F74wtDUC9L++GT+bVjyFZaQUPuriLb2AEvibrXDndILYTyyYiLKjoLUVrdW+1fK2QvhpKrSpEwq0GMfcDI+iNO8OnQDryanq0jlpgBRumWAj+UoneSfPzaCFlV/5gsta7Ih3A+kp84UoGNzx/vY4cD7LMQtF8G4f2nL3AAXpjhF468DlmvbOtMFuBlWikUthG8RPpV3xeEepEqxMqFclfgsQHYuTcZ1Ya+DZPalY4qX1Fgt8BzPjisOlZcjndatpbcNGmI4d7dZadc9pZud8JEyGz87Uqg/ZXyWLNQQCLJPw9UciqyJfei0jFQRbrejVmvD+A1EnG5F3GGu3FKqlncR3o8XpRoV44xOaOqpyCKAlCe7Ezc5kTb0=) 2025-07-12 19:54:06.709303 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNc+HIgCnwLvz2fzZEuZ4Rb7O/se0u3kCWnKgy3xv0iilbPdsgA8CakPjtMamBcQec8GMzlJazZ3RRz7roxOlTM=) 
2025-07-12 19:54:06.709312 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMk17GzA1Fcq7RkOysfDwtfNmfE+DtaXx3u2FyhzIEar) 2025-07-12 19:54:06.709321 | orchestrator | 2025-07-12 19:54:06.709330 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 19:54:06.709339 | orchestrator | Saturday 12 July 2025 19:54:03 +0000 (0:00:00.988) 0:00:09.483 ********* 2025-07-12 19:54:06.709348 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDO2ApRKKktWxF4Mlz7X4dI8vlZNQmqfWuZKUOBPn4DtgBL/Q5vIgeX2eDlvtg5x0jtGZhNgaAwBLHP3riUd6K2JHzskPPt2zHn9V38TnqJQVO4akEhCV/Nf2QgDVC1YmDb6mK3oDpBWSQgz+lzYRMLUlIbAGJbmckg0GRlZ0tURikGBz/uS9aXOy64QUP8c4c09R1BA7qk9DlesJMZhWg+YaJ3+NVa49qAlA8CMt3QNHzHN9tSeOsSTGLuktXKJBA38AWWYInLwgKdolyidmpaS5TPdZJU7EZFgue6Y72Kwt4gCv+gpqLrvTGcVhSysbMp66A2JiM/kHPpGb35ZbZTA9aqN+UTRBVmNyTNdfGNqdCYAuDQ/mUmvKsohEcFxARvYw7uej1X2tX7rsj7lU/BAgAhjl2paZ+0/5eROqmr0gB86nW57j4eKNAtbeMn68abOEAXGGeNYlR92h4GNlWS0Pp6RtWqqVPxGRUlvRVbsObXDdFsGJkR/0jdDCfWUfk=) 2025-07-12 19:54:06.709357 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA4qCfPxqEq8wKa+WeZ+URJKo+kqdFP/VWyhpYfzuphtqz9+8MilNn8iuEgTMQNhRNDA+GaFIaNKSpAf8vYhjSs=) 2025-07-12 19:54:06.709366 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINAn7Ik9BkZ5NvSKZaSyZ3KeWVffLrL45GP5uwItC+lr) 2025-07-12 19:54:06.709375 | orchestrator | 2025-07-12 19:54:06.709384 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 19:54:06.709394 | orchestrator | Saturday 12 July 2025 19:54:04 +0000 (0:00:00.992) 0:00:10.476 ********* 2025-07-12 19:54:06.709403 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCoJL5sdZH0g2eSye+rumN+cVf4MeOLpYf19nl3N6bNF57XhHN4aB8G6Iq7WjbpPB1zB6HXWXqR+McyaltknS/KW6iVevKB5NB9SdSnn+cHOuQyMXZ91u4xgGxcBq0HaforNsZNbg8FzlwuXjp3LZH5FK6DNl4Fm5HiPMSUH7Guw5tC9EE9eo6KWi6L5b9oeDJLtBaPwqo16kS3Xqko729PZizenMHr/jlxhsZ0GfCiNXA3LMzgS6QdBPM+IFk9N1txW/cFtHPpDY/gIW/a3oQUubzVV+QrUwWQHuEL+VFy7YmLzxhKQfX1NcA/7Dm/CasIfBnCl7OfrhXvlbkCDEe2dZFM1jvqHdrPMvODh3kL2K5dc6cgc8M8usy2eGiLofXmPxRyd4A1Skh8gTszFIQvHTJorZ9hDVZHQehXOzTjR9eYcwnkMYc7MqH0/1FEbQ3fvKNujQ7GkD9iaeTD9/vJkrOugQNahJv+YliC+Kuv09kYT0Yr7EXg2DDChjdz/18=) 2025-07-12 19:54:06.709422 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOKbOPlZs35XqH98/iuySS4mpVC1+YmOf0wD9p4cjl4I8VLpmmlT2Y7Jc9a8JmI2LGje/QhdHHgtiAceTjHGwD8=) 2025-07-12 19:54:06.709431 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGCFQ+7+weyi0/S4o4hjxcNno1DMWWQMASrsMwhLSJtB) 2025-07-12 19:54:06.709440 | orchestrator | 2025-07-12 19:54:06.709450 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 19:54:06.709459 | orchestrator | Saturday 12 July 2025 19:54:05 +0000 (0:00:01.063) 0:00:11.539 ********* 2025-07-12 19:54:06.709474 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHktvFjR9Ej04ZWoO+gGh9FEvydYodeSK12EPgSN72igNRMfB1LtJgA3gDb3+EoGqIkvJQGtfr57Xn50v4R+QybgXtNIlIkMN7MaKr7rU9XXloPwKP8iqH1kZCYFoI0hbi7OCAmMMOdmhW/hR5/S1dU2VmRylnjt9+TiCIWV9MnTlid5jXYK6ZTqI4r9IbkX18pUZcV+BgA/T6ORd+jb1a25SZQHaswxkv6W89ISRAQKEapdoXLJD7JLKbgd5v6lclQrxf2bJ+743Fljf57Ct3uwYnjdN+bO7s/ouUDqs5Io2euL8T5LLCQEFDBT64UyLXNIhCC02HlIJAgF7p/XqmzFJcrXDNjrTaSO5Hdj3KCH2BAwbMYxPbtLA/pgrtCGlj8tAC9uOi8bdH7XN40KOk3AJv4qSSbg8g/UDO9Qilf+LsLk0yTXhc/xXESd35Zja4chiM5hmVruLkX3KT1dSu6rFrBEi7gTiYzfrN6DDw4yS0O1FmefVL/AeOw+sTR4s=) 2025-07-12 19:54:17.370361 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJm+VMv1fWgw5HPnGlcygmv+KiNt3FVQ9HtTy8wfNH1H) 2025-07-12 19:54:17.370463 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBos1hhvrZqletFxn3cgIeHRCo4O/cTkBPXqksIkrJ18sKyEUsz+WXEAObKxeVlPxsB6rYtY6pvzBdCxLZW2hso=) 2025-07-12 19:54:17.370479 | orchestrator | 2025-07-12 19:54:17.370492 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 19:54:17.370505 | orchestrator | Saturday 12 July 2025 19:54:06 +0000 (0:00:01.137) 0:00:12.676 ********* 2025-07-12 19:54:17.370518 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDpckbmWenc2XaCaFuBDhYqcXGGosEZC/quGQ73pD0ich4ovxNzq2Ew+MY01al76KrVNooKnoBPMUPXidG/czzhvf6ealUgFI6SzVpwmHLoZExfVAtagP2gJYBiCsXtbfypBSmo29msnclkOWOgFnUVa3d/77bqLUE6hq89DK7eUzZ7Tv20HUWlAxPJslPUvLYuxffQxCXFKtLlbAfY17TqhByciFrFAa6any1eRDZCTIuQZzjeGuRUipBXeFeZd5uL7kFLiBxQEk+XCJbdV0o/EPfGRLB6HIEHSG83SCOE9tiRqt3QYd9xPmB/c5AMGfJF7I6s6ArQOUjO+eZpCDUExS9SFR9pesH2EXCbfArP9y7+G7tx9a0Lqk2Q+EAcHLQcLtghzYUmuamSkNA720XoWGGcX2LNshvVmShTEBYHwP5Mpufy3IF1UwDixXB06hbc7eIviARA3SOYE+L2WP9JBrqs4AOLjPfd/gX0Q+iq5NoLadbYIQ54DeNxXcFENUc=) 2025-07-12 19:54:17.370532 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMdzz3+j4n3l2PxKCIUWprAj3kwKGYVHBfpic6TkMY8S4ok4WCWO0+MqiQatlrrbuZN25LIR6YFrv1sVe6TSXqc=) 2025-07-12 19:54:17.370544 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFT9u1dk/1MUUK9mavqdqLSJ7tMhNekp2yQQN5zRA3uE) 2025-07-12 19:54:17.370555 | orchestrator | 2025-07-12 19:54:17.370567 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-07-12 19:54:17.370578 | orchestrator | Saturday 12 July 2025 19:54:07 +0000 (0:00:01.095) 
0:00:13.771 ********* 2025-07-12 19:54:17.370590 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-07-12 19:54:17.370601 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-07-12 19:54:17.370612 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-07-12 19:54:17.370640 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-07-12 19:54:17.370652 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-07-12 19:54:17.370663 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-07-12 19:54:17.370674 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-07-12 19:54:17.370749 | orchestrator | 2025-07-12 19:54:17.370785 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-07-12 19:54:17.370797 | orchestrator | Saturday 12 July 2025 19:54:13 +0000 (0:00:05.349) 0:00:19.120 ********* 2025-07-12 19:54:17.370809 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-07-12 19:54:17.370821 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-07-12 19:54:17.370832 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-07-12 19:54:17.370844 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-07-12 19:54:17.370855 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-07-12 19:54:17.370866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-07-12 19:54:17.370877 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-07-12 19:54:17.370887 | orchestrator | 2025-07-12 19:54:17.370898 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 19:54:17.370909 | orchestrator | Saturday 12 July 2025 19:54:13 +0000 (0:00:00.170) 0:00:19.291 ********* 2025-07-12 19:54:17.370923 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEkgL6TgyjngpPnUpnA+UG0lfzY4B+MRTjqWwaz3E6kR) 2025-07-12 19:54:17.370974 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4AXctZ8fbBb7tsGAdVonDbA9mUL7ikyUxwSTyVtyPBDr9XGfTyXWy69WfW+Ldq9aM3gFyCBG+7nNdI87aaiviGMBRm3iWvIiz9aXFhnAiL1EN7+XIZJxHXiCRc+rC0kvq3tFSHZy8OmbZQ4R83i2aELoRnRZ5GLPzW+HI+SYgkJjif/GORB4srZdE5Le9l9zHy/Tdz28a9J0EKcz96cBLK20V9S/M7fNSV8Ua165Kxxhe/hQGM20YXRbVdfUETGRadqLO33En1SRWQCeriQ8O3cUjxh7q1jHWCHoGHY5BhpY9o8Qis9+lLGIAs0vAyLinUHg7sAH/NLWLl+05P9QLOHi1w1p2ZdvODAToJsu5SvaGLyO2UWHoJr1Mb+NsC/mqPdKuzsh/zE8x2t3SSRezQ+hcDHQW1D+LxvNx7jnVzlNmvzcRHsvJIYvhe6DmrlL4dPdXATwzF1nsqCxoMDCBLM8+LlM4vnd2b3+iwSjlp5100wXdHAD63dXi8+vS6K0=) 2025-07-12 19:54:17.370991 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBE0udSVSYkj7sqGEpP2IY0x44ZXBeFgFowxfjLpAaegxBRumMPp0Si3jLlZiJlVeuVgtiBG5QHwEOKCxO6wWKDI=) 2025-07-12 
19:54:17.371003 | orchestrator | 2025-07-12 19:54:17.371016 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 19:54:17.371028 | orchestrator | Saturday 12 July 2025 19:54:14 +0000 (0:00:00.997) 0:00:20.288 ********* 2025-07-12 19:54:17.371045 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoeYemwKH8Ga5xQr+9MDAyk85VRZUdtxjiNmaAsshNmcjHKlh/+2nBDE8EFo53Rglwed+5/D6PN8ufqn89nNBWTk/sT584BTFGNW+C2zbfoTEfEayKyAvzu/yW3oVrW5DqDTO6w9IIlmagVGlC68grWm7VIsGukei6i5gEUxoESl1naJMUeeWphzmdw8cKZexRZxm8i/Wi4u7qblb+BhuAnDQ1kNkrbBYNgZqSlHZdXismcSOCYce8WRF569OYkgimhpUUDa44sN8x9mqNdML1qHOFjZr0uuefIH2QNFBK1H+e8os+khBJaHwJbsKLo26Fy6VQMUNQ7KoaZFf0Hhz4vIvbJTmMc6sxOJ8kPTVtXdzfvGkiCJVuFFStskz+8sLJOzdIxS8gJeUtdk2Z69HQGq3Is7r/l1ActNB9yFkvNNvM7z3Cpud338PM/Ozl19ReRxA3c24Fscmr/e/K7VZoD4/Auy0V1iX/V0TQJvmViZkL6pAerrapJsTEcWsGMZ0=) 2025-07-12 19:54:17.371058 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBN8xnFz4x5Epqf/ueIynXg1H7UkXM+KnrTigPEq6puNt1ZpDDUkt7Jq0xJn6zqFSkAACIohZvPG97xT0lwJ3UUo=) 2025-07-12 19:54:17.371109 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIZOfczbhPbUzFWtZazvNNgPxvlhEEdADmM7KPN6oPnw) 2025-07-12 19:54:17.371122 | orchestrator | 2025-07-12 19:54:17.371134 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 19:54:17.371146 | orchestrator | Saturday 12 July 2025 19:54:15 +0000 (0:00:00.972) 0:00:21.261 ********* 2025-07-12 19:54:17.371159 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCq1Mmh+iQolMsu3cYhFuucTaJguw5aw4xQkMZPlpitkLD84XfbxwyR5n+cAWm1+Ud+t0I0BEIcB1oCa5oUFaIgxs10Rwnjy1A3AkNJEjbi9EvLzIkTBWCl+QHT0apuT/h39Bmv5Dw6F74wtDUC9L++GT+bVjyFZaQUPuriLb2AEvibrXDndILYTyyYiLKjoLUVrdW+1fK2QvhpKrSpEwq0GMfcDI+iNO8OnQDryanq0jlpgBRumWAj+UoneSfPzaCFlV/5gsta7Ih3A+kp84UoGNzx/vY4cD7LMQtF8G4f2nL3AAXpjhF468DlmvbOtMFuBlWikUthG8RPpV3xeEepEqxMqFclfgsQHYuTcZ1Ya+DZPalY4qX1Fgt8BzPjisOlZcjndatpbcNGmI4d7dZadc9pZud8JEyGz87Uqg/ZXyWLNQQCLJPw9UciqyJfei0jFQRbrejVmvD+A1EnG5F3GGu3FKqlncR3o8XpRoV44xOaOqpyCKAlCe7Ezc5kTb0=) 2025-07-12 19:54:17.371172 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNc+HIgCnwLvz2fzZEuZ4Rb7O/se0u3kCWnKgy3xv0iilbPdsgA8CakPjtMamBcQec8GMzlJazZ3RRz7roxOlTM=) 2025-07-12 19:54:17.371184 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIMk17GzA1Fcq7RkOysfDwtfNmfE+DtaXx3u2FyhzIEar) 2025-07-12 19:54:17.371196 | orchestrator | 2025-07-12 19:54:17.371208 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 19:54:17.371220 | orchestrator | Saturday 12 July 2025 19:54:16 +0000 (0:00:01.052) 0:00:22.313 ********* 2025-07-12 19:54:17.371232 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBA4qCfPxqEq8wKa+WeZ+URJKo+kqdFP/VWyhpYfzuphtqz9+8MilNn8iuEgTMQNhRNDA+GaFIaNKSpAf8vYhjSs=) 2025-07-12 19:54:17.371245 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDO2ApRKKktWxF4Mlz7X4dI8vlZNQmqfWuZKUOBPn4DtgBL/Q5vIgeX2eDlvtg5x0jtGZhNgaAwBLHP3riUd6K2JHzskPPt2zHn9V38TnqJQVO4akEhCV/Nf2QgDVC1YmDb6mK3oDpBWSQgz+lzYRMLUlIbAGJbmckg0GRlZ0tURikGBz/uS9aXOy64QUP8c4c09R1BA7qk9DlesJMZhWg+YaJ3+NVa49qAlA8CMt3QNHzHN9tSeOsSTGLuktXKJBA38AWWYInLwgKdolyidmpaS5TPdZJU7EZFgue6Y72Kwt4gCv+gpqLrvTGcVhSysbMp66A2JiM/kHPpGb35ZbZTA9aqN+UTRBVmNyTNdfGNqdCYAuDQ/mUmvKsohEcFxARvYw7uej1X2tX7rsj7lU/BAgAhjl2paZ+0/5eROqmr0gB86nW57j4eKNAtbeMn68abOEAXGGeNYlR92h4GNlWS0Pp6RtWqqVPxGRUlvRVbsObXDdFsGJkR/0jdDCfWUfk=) 2025-07-12 19:54:17.371269 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAINAn7Ik9BkZ5NvSKZaSyZ3KeWVffLrL45GP5uwItC+lr) 2025-07-12 19:54:21.512261 | orchestrator | 2025-07-12 19:54:21.512373 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 19:54:21.512389 | orchestrator | Saturday 12 July 2025 19:54:17 +0000 (0:00:01.022) 0:00:23.336 ********* 2025-07-12 19:54:21.512402 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOKbOPlZs35XqH98/iuySS4mpVC1+YmOf0wD9p4cjl4I8VLpmmlT2Y7Jc9a8JmI2LGje/QhdHHgtiAceTjHGwD8=) 2025-07-12 19:54:21.512419 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCoJL5sdZH0g2eSye+rumN+cVf4MeOLpYf19nl3N6bNF57XhHN4aB8G6Iq7WjbpPB1zB6HXWXqR+McyaltknS/KW6iVevKB5NB9SdSnn+cHOuQyMXZ91u4xgGxcBq0HaforNsZNbg8FzlwuXjp3LZH5FK6DNl4Fm5HiPMSUH7Guw5tC9EE9eo6KWi6L5b9oeDJLtBaPwqo16kS3Xqko729PZizenMHr/jlxhsZ0GfCiNXA3LMzgS6QdBPM+IFk9N1txW/cFtHPpDY/gIW/a3oQUubzVV+QrUwWQHuEL+VFy7YmLzxhKQfX1NcA/7Dm/CasIfBnCl7OfrhXvlbkCDEe2dZFM1jvqHdrPMvODh3kL2K5dc6cgc8M8usy2eGiLofXmPxRyd4A1Skh8gTszFIQvHTJorZ9hDVZHQehXOzTjR9eYcwnkMYc7MqH0/1FEbQ3fvKNujQ7GkD9iaeTD9/vJkrOugQNahJv+YliC+Kuv09kYT0Yr7EXg2DDChjdz/18=) 2025-07-12 19:54:21.512459 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGCFQ+7+weyi0/S4o4hjxcNno1DMWWQMASrsMwhLSJtB) 2025-07-12 19:54:21.512472 | orchestrator | 2025-07-12 19:54:21.512484 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 19:54:21.512495 | orchestrator | Saturday 12 July 2025 19:54:18 +0000 (0:00:00.946) 0:00:24.283 ********* 2025-07-12 19:54:21.512506 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBos1hhvrZqletFxn3cgIeHRCo4O/cTkBPXqksIkrJ18sKyEUsz+WXEAObKxeVlPxsB6rYtY6pvzBdCxLZW2hso=) 2025-07-12 19:54:21.512533 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDHktvFjR9Ej04ZWoO+gGh9FEvydYodeSK12EPgSN72igNRMfB1LtJgA3gDb3+EoGqIkvJQGtfr57Xn50v4R+QybgXtNIlIkMN7MaKr7rU9XXloPwKP8iqH1kZCYFoI0hbi7OCAmMMOdmhW/hR5/S1dU2VmRylnjt9+TiCIWV9MnTlid5jXYK6ZTqI4r9IbkX18pUZcV+BgA/T6ORd+jb1a25SZQHaswxkv6W89ISRAQKEapdoXLJD7JLKbgd5v6lclQrxf2bJ+743Fljf57Ct3uwYnjdN+bO7s/ouUDqs5Io2euL8T5LLCQEFDBT64UyLXNIhCC02HlIJAgF7p/XqmzFJcrXDNjrTaSO5Hdj3KCH2BAwbMYxPbtLA/pgrtCGlj8tAC9uOi8bdH7XN40KOk3AJv4qSSbg8g/UDO9Qilf+LsLk0yTXhc/xXESd35Zja4chiM5hmVruLkX3KT1dSu6rFrBEi7gTiYzfrN6DDw4yS0O1FmefVL/AeOw+sTR4s=) 2025-07-12 19:54:21.512546 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJm+VMv1fWgw5HPnGlcygmv+KiNt3FVQ9HtTy8wfNH1H) 2025-07-12 19:54:21.512557 | orchestrator | 2025-07-12 19:54:21.512568 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-07-12 19:54:21.512578 | orchestrator | Saturday 12 July 2025 19:54:19 +0000 (0:00:01.047) 0:00:25.330 ********* 2025-07-12 19:54:21.512590 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDpckbmWenc2XaCaFuBDhYqcXGGosEZC/quGQ73pD0ich4ovxNzq2Ew+MY01al76KrVNooKnoBPMUPXidG/czzhvf6ealUgFI6SzVpwmHLoZExfVAtagP2gJYBiCsXtbfypBSmo29msnclkOWOgFnUVa3d/77bqLUE6hq89DK7eUzZ7Tv20HUWlAxPJslPUvLYuxffQxCXFKtLlbAfY17TqhByciFrFAa6any1eRDZCTIuQZzjeGuRUipBXeFeZd5uL7kFLiBxQEk+XCJbdV0o/EPfGRLB6HIEHSG83SCOE9tiRqt3QYd9xPmB/c5AMGfJF7I6s6ArQOUjO+eZpCDUExS9SFR9pesH2EXCbfArP9y7+G7tx9a0Lqk2Q+EAcHLQcLtghzYUmuamSkNA720XoWGGcX2LNshvVmShTEBYHwP5Mpufy3IF1UwDixXB06hbc7eIviARA3SOYE+L2WP9JBrqs4AOLjPfd/gX0Q+iq5NoLadbYIQ54DeNxXcFENUc=) 2025-07-12 19:54:21.512601 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMdzz3+j4n3l2PxKCIUWprAj3kwKGYVHBfpic6TkMY8S4ok4WCWO0+MqiQatlrrbuZN25LIR6YFrv1sVe6TSXqc=) 2025-07-12 19:54:21.512613 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFT9u1dk/1MUUK9mavqdqLSJ7tMhNekp2yQQN5zRA3uE) 2025-07-12 19:54:21.512624 | orchestrator | 2025-07-12 19:54:21.512635 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-07-12 19:54:21.512645 | orchestrator | Saturday 12 July 2025 19:54:20 +0000 (0:00:01.018) 0:00:26.349 ********* 2025-07-12 19:54:21.512656 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-07-12 19:54:21.512668 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-07-12 19:54:21.512678 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-07-12 19:54:21.512750 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-07-12 19:54:21.512762 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-07-12 19:54:21.512773 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-07-12 19:54:21.512784 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-07-12 19:54:21.512796 | orchestrator | 
skipping: [testbed-manager] 2025-07-12 19:54:21.512808 | orchestrator | 2025-07-12 19:54:21.512839 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-07-12 19:54:21.512852 | orchestrator | Saturday 12 July 2025 19:54:20 +0000 (0:00:00.202) 0:00:26.552 ********* 2025-07-12 19:54:21.512865 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:54:21.512885 | orchestrator | 2025-07-12 19:54:21.512898 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-07-12 19:54:21.512910 | orchestrator | Saturday 12 July 2025 19:54:20 +0000 (0:00:00.061) 0:00:26.613 ********* 2025-07-12 19:54:21.512922 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:54:21.512934 | orchestrator | 2025-07-12 19:54:21.512946 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-07-12 19:54:21.512958 | orchestrator | Saturday 12 July 2025 19:54:20 +0000 (0:00:00.052) 0:00:26.666 ********* 2025-07-12 19:54:21.512970 | orchestrator | changed: [testbed-manager] 2025-07-12 19:54:21.512982 | orchestrator | 2025-07-12 19:54:21.512994 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:54:21.513007 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 19:54:21.513020 | orchestrator | 2025-07-12 19:54:21.513031 | orchestrator | 2025-07-12 19:54:21.513043 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 19:54:21.513056 | orchestrator | Saturday 12 July 2025 19:54:21 +0000 (0:00:00.549) 0:00:27.215 ********* 2025-07-12 19:54:21.513067 | orchestrator | =============================================================================== 2025-07-12 19:54:21.513078 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.03s 2025-07-12 
19:54:21.513089 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.35s 2025-07-12 19:54:21.513100 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-07-12 19:54:21.513111 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.14s 2025-07-12 19:54:21.513122 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2025-07-12 19:54:21.513132 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-07-12 19:54:21.513143 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-07-12 19:54:21.513154 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-07-12 19:54:21.513165 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-07-12 19:54:21.513176 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-07-12 19:54:21.513187 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-07-12 19:54:21.513198 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2025-07-12 19:54:21.513209 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2025-07-12 19:54:21.513219 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2025-07-12 19:54:21.513230 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.96s 2025-07-12 19:54:21.513241 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.95s 2025-07-12 19:54:21.513251 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.55s 2025-07-12 
19:54:21.513262 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.20s 2025-07-12 19:54:21.513273 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.19s 2025-07-12 19:54:21.513284 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-07-12 19:54:21.804129 | orchestrator | + osism apply squid 2025-07-12 19:54:33.829230 | orchestrator | 2025-07-12 19:54:33 | INFO  | Task 5258e019-5bdd-4c02-a535-2a49eae6431d (squid) was prepared for execution. 2025-07-12 19:54:33.829348 | orchestrator | 2025-07-12 19:54:33 | INFO  | It takes a moment until task 5258e019-5bdd-4c02-a535-2a49eae6431d (squid) has been started and output is visible here. 2025-07-12 19:56:28.659655 | orchestrator | 2025-07-12 19:56:28.659865 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-07-12 19:56:28.659937 | orchestrator | 2025-07-12 19:56:28.659961 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-07-12 19:56:28.659980 | orchestrator | Saturday 12 July 2025 19:54:37 +0000 (0:00:00.159) 0:00:00.159 ********* 2025-07-12 19:56:28.660000 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-07-12 19:56:28.660021 | orchestrator | 2025-07-12 19:56:28.660039 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-07-12 19:56:28.660059 | orchestrator | Saturday 12 July 2025 19:54:37 +0000 (0:00:00.071) 0:00:00.231 ********* 2025-07-12 19:56:28.660078 | orchestrator | ok: [testbed-manager] 2025-07-12 19:56:28.660096 | orchestrator | 2025-07-12 19:56:28.660116 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-07-12 
19:56:28.660134 | orchestrator | Saturday 12 July 2025 19:54:39 +0000 (0:00:01.348) 0:00:01.580 ********* 2025-07-12 19:56:28.660153 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-07-12 19:56:28.660174 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-07-12 19:56:28.660195 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-07-12 19:56:28.660215 | orchestrator | 2025-07-12 19:56:28.660235 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-07-12 19:56:28.660253 | orchestrator | Saturday 12 July 2025 19:54:40 +0000 (0:00:01.262) 0:00:02.842 ********* 2025-07-12 19:56:28.660274 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-07-12 19:56:28.660295 | orchestrator | 2025-07-12 19:56:28.660315 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-07-12 19:56:28.660333 | orchestrator | Saturday 12 July 2025 19:54:41 +0000 (0:00:01.166) 0:00:04.008 ********* 2025-07-12 19:56:28.660352 | orchestrator | ok: [testbed-manager] 2025-07-12 19:56:28.660408 | orchestrator | 2025-07-12 19:56:28.660451 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-07-12 19:56:28.660472 | orchestrator | Saturday 12 July 2025 19:54:41 +0000 (0:00:00.371) 0:00:04.380 ********* 2025-07-12 19:56:28.660493 | orchestrator | changed: [testbed-manager] 2025-07-12 19:56:28.660515 | orchestrator | 2025-07-12 19:56:28.660534 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-07-12 19:56:28.660552 | orchestrator | Saturday 12 July 2025 19:54:42 +0000 (0:00:00.953) 0:00:05.333 ********* 2025-07-12 19:56:28.660570 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-07-12 19:56:28.660590 | orchestrator | ok: [testbed-manager] 2025-07-12 19:56:28.660610 | orchestrator | 2025-07-12 19:56:28.660629 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-07-12 19:56:28.660655 | orchestrator | Saturday 12 July 2025 19:55:15 +0000 (0:00:32.241) 0:00:37.574 ********* 2025-07-12 19:56:28.660674 | orchestrator | changed: [testbed-manager] 2025-07-12 19:56:28.660694 | orchestrator | 2025-07-12 19:56:28.660712 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-07-12 19:56:28.660731 | orchestrator | Saturday 12 July 2025 19:55:27 +0000 (0:00:12.503) 0:00:50.078 ********* 2025-07-12 19:56:28.660864 | orchestrator | Pausing for 60 seconds 2025-07-12 19:56:28.660885 | orchestrator | changed: [testbed-manager] 2025-07-12 19:56:28.660902 | orchestrator | 2025-07-12 19:56:28.660921 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-07-12 19:56:28.660940 | orchestrator | Saturday 12 July 2025 19:56:27 +0000 (0:01:00.064) 0:01:50.143 ********* 2025-07-12 19:56:28.660957 | orchestrator | ok: [testbed-manager] 2025-07-12 19:56:28.660976 | orchestrator | 2025-07-12 19:56:28.660994 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-07-12 19:56:28.661012 | orchestrator | Saturday 12 July 2025 19:56:27 +0000 (0:00:00.077) 0:01:50.220 ********* 2025-07-12 19:56:28.661030 | orchestrator | changed: [testbed-manager] 2025-07-12 19:56:28.661048 | orchestrator | 2025-07-12 19:56:28.661079 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 19:56:28.661097 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 19:56:28.661116 | orchestrator | 2025-07-12 19:56:28.661135 | orchestrator | 2025-07-12 19:56:28.661153 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-07-12 19:56:28.661170 | orchestrator | Saturday 12 July 2025 19:56:28 +0000 (0:00:00.651) 0:01:50.872 ********* 2025-07-12 19:56:28.661190 | orchestrator | =============================================================================== 2025-07-12 19:56:28.661206 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.06s 2025-07-12 19:56:28.661221 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 32.24s 2025-07-12 19:56:28.661239 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.50s 2025-07-12 19:56:28.661255 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.35s 2025-07-12 19:56:28.661271 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.26s 2025-07-12 19:56:28.661286 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.17s 2025-07-12 19:56:28.661303 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.95s 2025-07-12 19:56:28.661318 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.65s 2025-07-12 19:56:28.661334 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2025-07-12 19:56:28.661349 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.08s 2025-07-12 19:56:28.661366 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.07s 2025-07-12 19:56:28.982332 | orchestrator | + [[ 9.2.0 != \l\a\t\e\s\t ]] 2025-07-12 19:56:28.982435 | orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-07-12 19:56:28.987971 | orchestrator | ++ semver 9.2.0 9.0.0 
2025-07-12 19:56:29.065895 | orchestrator | + [[ 1 -lt 0 ]]
2025-07-12 19:56:29.066818 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes
2025-07-12 19:56:41.096406 | orchestrator | 2025-07-12 19:56:41 | INFO  | Task f8bd5f19-8b3a-4226-84a9-e0b1f7dca4b3 (operator) was prepared for execution.
2025-07-12 19:56:41.096535 | orchestrator | 2025-07-12 19:56:41 | INFO  | It takes a moment until task f8bd5f19-8b3a-4226-84a9-e0b1f7dca4b3 (operator) has been started and output is visible here.
2025-07-12 19:56:58.206868 | orchestrator |
2025-07-12 19:56:58.206967 | orchestrator | PLAY [Make ssh pipelining working] *********************************************
2025-07-12 19:56:58.206984 | orchestrator |
2025-07-12 19:56:58.206996 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 19:56:58.207008 | orchestrator | Saturday 12 July 2025 19:56:45 +0000 (0:00:00.137) 0:00:00.137 *********
2025-07-12 19:56:58.207018 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:56:58.207030 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:56:58.207041 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:56:58.207051 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:56:58.207062 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:56:58.207072 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:56:58.207083 | orchestrator |
2025-07-12 19:56:58.207093 | orchestrator | TASK [Do not require tty for all users] ****************************************
2025-07-12 19:56:58.207104 | orchestrator | Saturday 12 July 2025 19:56:48 +0000 (0:00:03.666) 0:00:03.804 *********
2025-07-12 19:56:58.207115 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:56:58.207126 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:56:58.207136 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:56:58.207147 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:56:58.207157 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:56:58.207168 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:56:58.207178 | orchestrator |
2025-07-12 19:56:58.207189 | orchestrator | PLAY [Apply role operator] *****************************************************
2025-07-12 19:56:58.207216 | orchestrator |
2025-07-12 19:56:58.207228 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] *****
2025-07-12 19:56:58.207238 | orchestrator | Saturday 12 July 2025 19:56:49 +0000 (0:00:00.728) 0:00:04.532 *********
2025-07-12 19:56:58.207249 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:56:58.207259 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:56:58.207270 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:56:58.207280 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:56:58.207291 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:56:58.207301 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:56:58.207311 | orchestrator |
2025-07-12 19:56:58.207322 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] ***
2025-07-12 19:56:58.207332 | orchestrator | Saturday 12 July 2025 19:56:49 +0000 (0:00:00.159) 0:00:04.691 *********
2025-07-12 19:56:58.207343 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:56:58.207353 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:56:58.207364 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:56:58.207374 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:56:58.207385 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:56:58.207395 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:56:58.207405 | orchestrator |
2025-07-12 19:56:58.207416 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2025-07-12 19:56:58.207427 | orchestrator | Saturday 12 July 2025 19:56:49 +0000 (0:00:00.156) 0:00:04.848 *********
2025-07-12 19:56:58.207437 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:56:58.207448 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:56:58.207458 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:56:58.207469 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:56:58.207479 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:56:58.207489 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:56:58.207500 | orchestrator |
2025-07-12 19:56:58.207510 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2025-07-12 19:56:58.207521 | orchestrator | Saturday 12 July 2025 19:56:50 +0000 (0:00:00.552) 0:00:05.401 *********
2025-07-12 19:56:58.207531 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:56:58.207542 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:56:58.207552 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:56:58.207562 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:56:58.207573 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:56:58.207583 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:56:58.207593 | orchestrator |
2025-07-12 19:56:58.207604 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2025-07-12 19:56:58.207615 | orchestrator | Saturday 12 July 2025 19:56:51 +0000 (0:00:00.647) 0:00:06.048 *********
2025-07-12 19:56:58.207625 | orchestrator | ok: [testbed-node-2] => (item=adm)
2025-07-12 19:56:58.207637 | orchestrator | ok: [testbed-node-0] => (item=adm)
2025-07-12 19:56:58.207647 | orchestrator | ok: [testbed-node-3] => (item=adm)
2025-07-12 19:56:58.207658 | orchestrator | ok: [testbed-node-4] => (item=adm)
2025-07-12 19:56:58.207668 | orchestrator | ok: [testbed-node-5] => (item=adm)
2025-07-12 19:56:58.207678 | orchestrator | ok: [testbed-node-2] => (item=sudo)
2025-07-12 19:56:58.207688 | orchestrator | ok: [testbed-node-0] => (item=sudo)
2025-07-12 19:56:58.207699 | orchestrator | ok: [testbed-node-4] => (item=sudo)
2025-07-12 19:56:58.207709 | orchestrator | ok: [testbed-node-3] => (item=sudo)
2025-07-12 19:56:58.207720 | orchestrator | ok: [testbed-node-5] => (item=sudo)
2025-07-12 19:56:58.207730 | orchestrator | ok: [testbed-node-1] => (item=adm)
2025-07-12 19:56:58.207740 | orchestrator | ok: [testbed-node-1] => (item=sudo)
2025-07-12 19:56:58.207751 | orchestrator |
2025-07-12 19:56:58.207761 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2025-07-12 19:56:58.207772 | orchestrator | Saturday 12 July 2025 19:56:53 +0000 (0:00:02.013) 0:00:08.062 *********
2025-07-12 19:56:58.207799 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:56:58.207811 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:56:58.207821 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:56:58.207839 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:56:58.207849 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:56:58.207865 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:56:58.207876 | orchestrator |
2025-07-12 19:56:58.207887 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-07-12 19:56:58.207899 | orchestrator | Saturday 12 July 2025 19:56:54 +0000 (0:00:01.188) 0:00:09.250 *********
2025-07-12 19:56:58.207909 | orchestrator | ok: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 19:56:58.207921 | orchestrator | ok: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 19:56:58.207932 | orchestrator | ok: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 19:56:58.207942 | orchestrator | ok: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 19:56:58.207953 | orchestrator | ok: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 19:56:58.207969 | orchestrator | ok: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-07-12 19:56:58.207989 | orchestrator | ok: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-07-12 19:56:58.208086 | orchestrator | ok: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-07-12 19:56:58.208125 | orchestrator | ok: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-07-12 19:56:58.208151 | orchestrator | ok: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-07-12 19:56:58.208162 | orchestrator | ok: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-07-12 19:56:58.208173 | orchestrator | ok: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-07-12 19:56:58.208184 | orchestrator | ok: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-07-12 19:56:58.208195 | orchestrator | ok: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-07-12 19:56:58.208206 | orchestrator | ok: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-07-12 19:56:58.208217 | orchestrator | ok: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-07-12 19:56:58.208227 | orchestrator | ok: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-07-12 19:56:58.208238 | orchestrator | ok: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-07-12 19:56:58.208249 | orchestrator |
2025-07-12 19:56:58.208260 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-07-12 19:56:58.208271 | orchestrator | Saturday 12 July 2025 19:56:55 +0000 (0:00:01.284) 0:00:10.534 *********
2025-07-12 19:56:58.208282 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:56:58.208292 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:56:58.208303 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:56:58.208314 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:56:58.208324 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:56:58.208335 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:56:58.208346 | orchestrator |
2025-07-12 19:56:58.208357 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-07-12 19:56:58.208368 | orchestrator | Saturday 12 July 2025 19:56:55 +0000 (0:00:00.135) 0:00:10.669 *********
2025-07-12 19:56:58.208378 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:56:58.208389 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:56:58.208400 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:56:58.208410 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:56:58.208421 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:56:58.208432 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:56:58.208442 | orchestrator |
2025-07-12 19:56:58.208453 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-07-12 19:56:58.208468 | orchestrator | Saturday 12 July 2025 19:56:56 +0000 (0:00:00.517) 0:00:11.187 *********
2025-07-12 19:56:58.208479 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:56:58.208490 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:56:58.208501 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:56:58.208511 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:56:58.208522 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:56:58.208533 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:56:58.208552 | orchestrator |
2025-07-12 19:56:58.208563 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-07-12 19:56:58.208574 | orchestrator | Saturday 12 July 2025 19:56:56 +0000 (0:00:00.186) 0:00:11.374 *********
2025-07-12 19:56:58.208584 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-12 19:56:58.208595 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-12 19:56:58.208640 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-07-12 19:56:58.208652 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:56:58.208663 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:56:58.208674 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:56:58.208684 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-07-12 19:56:58.208695 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:56:58.208706 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-12 19:56:58.208716 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:56:58.208727 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 19:56:58.208738 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:56:58.208749 | orchestrator |
2025-07-12 19:56:58.208760 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-07-12 19:56:58.208770 | orchestrator | Saturday 12 July 2025 19:56:57 +0000 (0:00:00.805) 0:00:12.179 *********
2025-07-12 19:56:58.208815 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:56:58.208833 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:56:58.208851 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:56:58.208869 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:56:58.208887 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:56:58.208899 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:56:58.208909 | orchestrator |
2025-07-12 19:56:58.208920 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-07-12 19:56:58.208931 | orchestrator | Saturday 12 July 2025 19:56:57 +0000 (0:00:00.146) 0:00:12.326 *********
2025-07-12 19:56:58.208942 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:56:58.208952 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:56:58.208963 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:56:58.208974 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:56:58.208984 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:56:58.208995 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:56:58.209005 | orchestrator |
2025-07-12 19:56:58.209016 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-07-12 19:56:58.209027 | orchestrator | Saturday 12 July 2025 19:56:57 +0000 (0:00:00.131) 0:00:12.457 *********
2025-07-12 19:56:58.209037 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:56:58.209048 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:56:58.209059 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:56:58.209069 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:56:58.209080 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:56:58.209090 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:56:58.209101 | orchestrator |
2025-07-12 19:56:58.209111 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-07-12 19:56:58.209122 | orchestrator | Saturday 12 July 2025 19:56:57 +0000 (0:00:00.139) 0:00:12.596 *********
2025-07-12 19:56:58.209133 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:56:58.209143 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:56:58.209154 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:56:58.209164 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:56:58.209175 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:56:58.209185 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:56:58.209196 | orchestrator |
2025-07-12 19:56:58.209207 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-07-12 19:56:58.209226 | orchestrator | Saturday 12 July 2025 19:56:58 +0000 (0:00:00.634) 0:00:13.230 *********
2025-07-12 19:56:58.582949 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:56:58.583030 | orchestrator | skipping: [testbed-node-1]
2025-07-12 19:56:58.583063 | orchestrator | skipping: [testbed-node-2]
2025-07-12 19:56:58.583073 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:56:58.583083 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:56:58.583092 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:56:58.583102 | orchestrator |
2025-07-12 19:56:58.583112 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 19:56:58.583123 | orchestrator | testbed-node-0 : ok=12  changed=2  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 19:56:58.583133 | orchestrator | testbed-node-1 : ok=12  changed=2  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 19:56:58.583143 | orchestrator | testbed-node-2 : ok=12  changed=2  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 19:56:58.583153 | orchestrator | testbed-node-3 : ok=12  changed=2  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 19:56:58.583162 | orchestrator | testbed-node-4 : ok=12  changed=2  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 19:56:58.583172 | orchestrator | testbed-node-5 : ok=12  changed=2  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 19:56:58.583181 | orchestrator |
2025-07-12 19:56:58.583191 | orchestrator |
2025-07-12 19:56:58.583200 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 19:56:58.583210 | orchestrator | Saturday 12 July 2025 19:56:58 +0000 (0:00:00.204) 0:00:13.435 *********
2025-07-12 19:56:58.583219 | orchestrator | ===============================================================================
2025-07-12 19:56:58.583229 | orchestrator | Gathering Facts --------------------------------------------------------- 3.67s
2025-07-12 19:56:58.583238 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 2.01s
2025-07-12 19:56:58.583248 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.28s
2025-07-12 19:56:58.583258 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.19s
2025-07-12 19:56:58.583268 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.81s
2025-07-12 19:56:58.583277 | orchestrator | Do not require tty for all users ---------------------------------------- 0.73s
2025-07-12 19:56:58.583286 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.65s
2025-07-12 19:56:58.583296 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.63s
2025-07-12 19:56:58.583305 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.55s
2025-07-12 19:56:58.583315 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.52s
2025-07-12 19:56:58.583325 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.20s
2025-07-12 19:56:58.583334 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s
2025-07-12 19:56:58.583344 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2025-07-12 19:56:58.583353 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s
2025-07-12 19:56:58.583362 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s
2025-07-12 19:56:58.583372 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2025-07-12 19:56:58.583381 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s
2025-07-12 19:56:58.583391 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.13s
2025-07-12 19:56:58.780223 | orchestrator | + osism apply --environment custom facts
2025-07-12 19:57:00.404772 | orchestrator | 2025-07-12 19:57:00 | INFO  | Trying to run play facts in environment custom
2025-07-12 19:57:10.554507 | orchestrator | 2025-07-12 19:57:10 | INFO  | Task 8c79fe66-de53-4740-9194-1ac5037517fe (facts) was prepared for execution.
2025-07-12 19:57:10.554653 | orchestrator | 2025-07-12 19:57:10 | INFO  | It takes a moment until task 8c79fe66-de53-4740-9194-1ac5037517fe (facts) has been started and output is visible here.
2025-07-12 19:57:56.109822 | orchestrator |
2025-07-12 19:57:56.109952 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-07-12 19:57:56.109963 | orchestrator |
2025-07-12 19:57:56.109969 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-07-12 19:57:56.109975 | orchestrator | Saturday 12 July 2025 19:57:14 +0000 (0:00:00.080) 0:00:00.080 *********
2025-07-12 19:57:56.109980 | orchestrator | ok: [testbed-manager]
2025-07-12 19:57:56.109987 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:57:56.109993 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:57:56.109998 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:57:56.110004 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:57:56.110009 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:57:56.110014 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:57:56.110062 | orchestrator |
2025-07-12 19:57:56.110067 | orchestrator | TASK [Copy fact file] **********************************************************
2025-07-12 19:57:56.110072 | orchestrator | Saturday 12 July 2025 19:57:15 +0000 (0:00:01.317) 0:00:01.398 *********
2025-07-12 19:57:56.110092 | orchestrator | ok: [testbed-manager]
2025-07-12 19:57:56.110099 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:57:56.110105 | orchestrator | changed: [testbed-node-0]
2025-07-12 19:57:56.110111 | orchestrator | changed: [testbed-node-1]
2025-07-12 19:57:56.110117 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:57:56.110123 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:57:56.110129 | orchestrator | changed: [testbed-node-2]
2025-07-12 19:57:56.110135 | orchestrator |
2025-07-12 19:57:56.110141 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-07-12 19:57:56.110147 | orchestrator |
2025-07-12 19:57:56.110153 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-07-12 19:57:56.110159 | orchestrator | Saturday 12 July 2025 19:57:16 +0000 (0:00:01.161) 0:00:02.559 *********
2025-07-12 19:57:56.110191 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:57:56.110198 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:57:56.110204 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:57:56.110210 | orchestrator |
2025-07-12 19:57:56.110216 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-07-12 19:57:56.110223 | orchestrator | Saturday 12 July 2025 19:57:16 +0000 (0:00:00.081) 0:00:02.640 *********
2025-07-12 19:57:56.110229 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:57:56.110235 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:57:56.110240 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:57:56.110246 | orchestrator |
2025-07-12 19:57:56.110252 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-07-12 19:57:56.110258 | orchestrator | Saturday 12 July 2025 19:57:16 +0000 (0:00:00.188) 0:00:02.828 *********
2025-07-12 19:57:56.110264 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:57:56.110270 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:57:56.110275 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:57:56.110281 | orchestrator |
2025-07-12 19:57:56.110288 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-07-12 19:57:56.110294 | orchestrator | Saturday 12 July 2025 19:57:17 +0000 (0:00:00.168) 0:00:02.996 *********
2025-07-12 19:57:56.110304 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 19:57:56.110312 | orchestrator |
2025-07-12 19:57:56.110318 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-07-12 19:57:56.110324 | orchestrator | Saturday 12 July 2025 19:57:17 +0000 (0:00:00.135) 0:00:03.132 *********
2025-07-12 19:57:56.110346 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:57:56.110352 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:57:56.110358 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:57:56.110364 | orchestrator |
2025-07-12 19:57:56.110370 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-07-12 19:57:56.110375 | orchestrator | Saturday 12 July 2025 19:57:17 +0000 (0:00:00.465) 0:00:03.598 *********
2025-07-12 19:57:56.110382 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:57:56.110388 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:57:56.110395 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:57:56.110401 | orchestrator |
2025-07-12 19:57:56.110408 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-07-12 19:57:56.110414 | orchestrator | Saturday 12 July 2025 19:57:17 +0000 (0:00:00.113) 0:00:03.711 *********
2025-07-12 19:57:56.110421 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:57:56.110427 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:57:56.110434 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:57:56.110440 | orchestrator |
2025-07-12 19:57:56.110447 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-07-12 19:57:56.110453 | orchestrator | Saturday 12 July 2025 19:57:18 +0000 (0:00:01.002) 0:00:04.714 *********
2025-07-12 19:57:56.110460 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:57:56.110466 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:57:56.110472 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:57:56.110479 | orchestrator |
2025-07-12 19:57:56.110486 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-07-12 19:57:56.110493 | orchestrator | Saturday 12 July 2025 19:57:19 +0000 (0:00:00.471) 0:00:05.185 *********
2025-07-12 19:57:56.110499 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:57:56.110505 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:57:56.110512 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:57:56.110519 | orchestrator |
2025-07-12 19:57:56.110525 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-07-12 19:57:56.110532 | orchestrator | Saturday 12 July 2025 19:57:20 +0000 (0:00:01.005) 0:00:06.191 *********
2025-07-12 19:57:56.110539 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:57:56.110546 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:57:56.110552 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:57:56.110558 | orchestrator |
2025-07-12 19:57:56.110565 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-07-12 19:57:56.110571 | orchestrator | Saturday 12 July 2025 19:57:35 +0000 (0:00:14.806) 0:00:20.997 *********
2025-07-12 19:57:56.110578 | orchestrator | skipping: [testbed-node-3]
2025-07-12 19:57:56.110584 | orchestrator | skipping: [testbed-node-4]
2025-07-12 19:57:56.110590 | orchestrator | skipping: [testbed-node-5]
2025-07-12 19:57:56.110597 | orchestrator |
2025-07-12 19:57:56.110603 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-07-12 19:57:56.110626 | orchestrator | Saturday 12 July 2025 19:57:35 +0000 (0:00:00.081) 0:00:21.078 *********
2025-07-12 19:57:56.110633 | orchestrator | changed: [testbed-node-4]
2025-07-12 19:57:56.110640 | orchestrator | changed: [testbed-node-3]
2025-07-12 19:57:56.110649 | orchestrator | changed: [testbed-node-5]
2025-07-12 19:57:56.110658 | orchestrator |
2025-07-12 19:57:56.110667 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-07-12 19:57:56.110677 | orchestrator | Saturday 12 July 2025 19:57:45 +0000 (0:00:10.800) 0:00:31.879 *********
2025-07-12 19:57:56.110686 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:57:56.110696 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:57:56.110706 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:57:56.110713 | orchestrator |
2025-07-12 19:57:56.110719 | orchestrator | TASK [Copy fact files] *********************************************************
2025-07-12 19:57:56.110725 | orchestrator | Saturday 12 July 2025 19:57:46 +0000 (0:00:00.428) 0:00:32.308 *********
2025-07-12 19:57:56.110732 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-07-12 19:57:56.110745 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-07-12 19:57:56.110751 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-07-12 19:57:56.110758 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-07-12 19:57:56.110765 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-07-12 19:57:56.110771 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-07-12 19:57:56.110777 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-07-12 19:57:56.110784 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-07-12 19:57:56.110790 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-07-12 19:57:56.110796 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-07-12 19:57:56.110802 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-07-12 19:57:56.110808 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-07-12 19:57:56.110813 | orchestrator |
2025-07-12 19:57:56.110819 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-07-12 19:57:56.110825 | orchestrator | Saturday 12 July 2025 19:57:49 +0000 (0:00:03.531) 0:00:35.839 *********
2025-07-12 19:57:56.110830 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:57:56.110836 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:57:56.110842 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:57:56.110847 | orchestrator |
2025-07-12 19:57:56.110853 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-12 19:57:56.110859 | orchestrator |
2025-07-12 19:57:56.110922 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 19:57:56.110933 | orchestrator | Saturday 12 July 2025 19:57:51 +0000 (0:00:01.237) 0:00:37.077 *********
2025-07-12 19:57:56.110947 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:57:56.110957 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:57:56.110965 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:57:56.110974 | orchestrator | ok: [testbed-manager]
2025-07-12 19:57:56.110983 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:57:56.110992 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:57:56.111001 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:57:56.111009 | orchestrator |
2025-07-12 19:57:56.111018 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 19:57:56.111028 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:57:56.111037 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:57:56.111048 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:57:56.111057 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 19:57:56.111067 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 19:57:56.111077 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 19:57:56.111087 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 19:57:56.111096 | orchestrator |
2025-07-12 19:57:56.111106 | orchestrator |
2025-07-12 19:57:56.111116 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 19:57:56.111124 | orchestrator | Saturday 12 July 2025 19:57:56 +0000 (0:00:04.941) 0:00:42.018 *********
2025-07-12 19:57:56.111143 | orchestrator | ===============================================================================
2025-07-12 19:57:56.111153 | orchestrator | osism.commons.repository : Update package cache ------------------------ 14.81s
2025-07-12 19:57:56.111164 | orchestrator | Install required packages (Debian) ------------------------------------- 10.80s
2025-07-12 19:57:56.111173 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.94s
2025-07-12 19:57:56.111183 | orchestrator | Copy fact files --------------------------------------------------------- 3.53s
2025-07-12 19:57:56.111193 | orchestrator | Create custom facts directory ------------------------------------------- 1.32s
2025-07-12 19:57:56.111202 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.24s
2025-07-12 19:57:56.111221 | orchestrator | Copy fact file ---------------------------------------------------------- 1.16s
2025-07-12 19:57:56.436252 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.01s
2025-07-12 19:57:56.436358 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.00s
2025-07-12 19:57:56.436373 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.47s
2025-07-12 19:57:56.436382 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.47s
2025-07-12 19:57:56.436394 | orchestrator | Create custom facts directory ------------------------------------------- 0.43s
2025-07-12 19:57:56.436403 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s
2025-07-12 19:57:56.436413 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.17s
2025-07-12 19:57:56.436423 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.14s
2025-07-12 19:57:56.436435 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2025-07-12 19:57:56.436445 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.08s
2025-07-12 19:57:56.436474 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.08s
2025-07-12 19:57:56.782975 | orchestrator | + osism apply bootstrap
2025-07-12 19:58:09.025200 | orchestrator | 2025-07-12 19:58:09 | INFO  | Task 185eb99b-d37d-453a-820e-42984f79fe7a (bootstrap) was prepared for execution.
2025-07-12 19:58:09.025270 | orchestrator | 2025-07-12 19:58:09 | INFO  | It takes a moment until task 185eb99b-d37d-453a-820e-42984f79fe7a (bootstrap) has been started and output is visible here.
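For anyone replaying this job by hand, the three `osism apply` invocations traced in this log (`operator`, `facts` in the `custom` environment, then `bootstrap`) can be sketched as a short shell script. The `run` wrapper and its `DRY_RUN` switch are illustrative additions, not part of the osism CLI: with `DRY_RUN=1` (the default here) each command is only echoed, since `osism` is available only on the testbed manager.

```shell
#!/bin/sh
# Sketch of the deployment sequence observed above. Set DRY_RUN=0 on a
# manager node where the osism CLI is installed to actually execute it.
run() {
    if [ "${DRY_RUN:-1}" = 1 ]; then
        echo "+ $*"    # mimic the set -x trace seen in the job log
    else
        "$@"
    fi
}

run osism apply operator -u ubuntu -l testbed-nodes   # create the operator user on the nodes
run osism apply --environment custom facts            # distribute custom facts
run osism apply bootstrap                             # bootstrap all hosts
```

Each `osism apply` call only queues a task; as the log's INFO lines show, there is a short delay before the task starts and its Ansible output becomes visible.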
2025-07-12 19:58:24.461149 | orchestrator |
2025-07-12 19:58:24.461206 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-07-12 19:58:24.461212 | orchestrator |
2025-07-12 19:58:24.461217 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-07-12 19:58:24.461221 | orchestrator | Saturday 12 July 2025 19:58:12 +0000 (0:00:00.172) 0:00:00.173 *********
2025-07-12 19:58:24.461226 | orchestrator | ok: [testbed-manager]
2025-07-12 19:58:24.461231 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:58:24.461236 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:58:24.461241 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:58:24.461245 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:58:24.461250 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:58:24.461265 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:58:24.461269 | orchestrator |
2025-07-12 19:58:24.461274 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-12 19:58:24.461279 | orchestrator |
2025-07-12 19:58:24.461284 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 19:58:24.461288 | orchestrator | Saturday 12 July 2025 19:58:13 +0000 (0:00:00.271) 0:00:00.444 *********
2025-07-12 19:58:24.461295 | orchestrator | ok: [testbed-node-2]
2025-07-12 19:58:24.461300 | orchestrator | ok: [testbed-node-1]
2025-07-12 19:58:24.461305 | orchestrator | ok: [testbed-node-0]
2025-07-12 19:58:24.461309 | orchestrator | ok: [testbed-manager]
2025-07-12 19:58:24.461314 | orchestrator | ok: [testbed-node-4]
2025-07-12 19:58:24.461318 | orchestrator | ok: [testbed-node-3]
2025-07-12 19:58:24.461323 | orchestrator | ok: [testbed-node-5]
2025-07-12 19:58:24.461338 | orchestrator |
2025-07-12 19:58:24.461343 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-07-12 19:58:24.461348 | orchestrator |
2025-07-12 19:58:24.461352 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 19:58:24.461357 | orchestrator | Saturday 12 July 2025 19:58:17 +0000 (0:00:03.937) 0:00:04.382 *********
2025-07-12 19:58:24.461362 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-07-12 19:58:24.461367 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-07-12 19:58:24.461371 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-07-12 19:58:24.461376 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-07-12 19:58:24.461381 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 19:58:24.461385 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 19:58:24.461390 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-07-12 19:58:24.461395 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-07-12 19:58:24.461399 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 19:58:24.461404 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-07-12 19:58:24.461408 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-07-12 19:58:24.461413 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-07-12 19:58:24.461418 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-07-12 19:58:24.461422 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-07-12 19:58:24.461427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-07-12 19:58:24.461431 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-07-12 19:58:24.461436 | orchestrator | skipping: [testbed-manager]
2025-07-12 19:58:24.461441 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-07-12 19:58:24.461445 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-07-12 19:58:24.461450 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-07-12 19:58:24.461455 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-07-12 19:58:24.461459 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-07-12 19:58:24.461464 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-07-12 19:58:24.461468 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-07-12 19:58:24.461473 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-07-12 19:58:24.461478 | orchestrator | skipping: [testbed-node-0]
2025-07-12 19:58:24.461482 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-07-12 19:58:24.461487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-07-12 19:58:24.461491 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-07-12 19:58:24.461496 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-07-12 19:58:24.461501 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-07-12 19:58:24.461505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 19:58:24.461510 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-07-12 19:58:24.461515 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-07-12 19:58:24.461519 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-07-12 19:58:24.461524 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 19:58:24.461528 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-07-12 19:58:24.461533 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-07-12 19:58:24.461537 | orchestrator | skipping:
[testbed-node-5] => (item=testbed-node-1)  2025-07-12 19:58:24.461542 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 19:58:24.461550 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:58:24.461554 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-07-12 19:58:24.461559 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-07-12 19:58:24.461564 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-07-12 19:58:24.461569 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-07-12 19:58:24.461573 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-07-12 19:58:24.461587 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-07-12 19:58:24.461592 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:58:24.461597 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-07-12 19:58:24.461601 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-07-12 19:58:24.461606 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:58:24.461611 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-07-12 19:58:24.461615 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:58:24.461620 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-07-12 19:58:24.461625 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-07-12 19:58:24.461629 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:58:24.461634 | orchestrator | 2025-07-12 19:58:24.461639 | orchestrator | PLAY [Apply bootstrap roles part 1] ******************************************** 2025-07-12 19:58:24.461644 | orchestrator | 2025-07-12 19:58:24.461648 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2025-07-12 19:58:24.461656 | orchestrator | Saturday 12 July 2025 19:58:17 +0000 (0:00:00.386) 
0:00:04.769 ********* 2025-07-12 19:58:24.461660 | orchestrator | ok: [testbed-manager] 2025-07-12 19:58:24.461665 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:58:24.461670 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:24.461674 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:58:24.461681 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:24.461688 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:58:24.461695 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:24.461701 | orchestrator | 2025-07-12 19:58:24.461708 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2025-07-12 19:58:24.461715 | orchestrator | Saturday 12 July 2025 19:58:18 +0000 (0:00:01.251) 0:00:06.021 ********* 2025-07-12 19:58:24.461722 | orchestrator | ok: [testbed-manager] 2025-07-12 19:58:24.461729 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:24.461735 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:58:24.461752 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:24.461760 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:58:24.461767 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:24.461774 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:58:24.461788 | orchestrator | 2025-07-12 19:58:24.461796 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2025-07-12 19:58:24.461804 | orchestrator | Saturday 12 July 2025 19:58:19 +0000 (0:00:01.186) 0:00:07.207 ********* 2025-07-12 19:58:24.461812 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:58:24.461822 | orchestrator | 2025-07-12 19:58:24.461827 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2025-07-12 19:58:24.461832 | orchestrator | Saturday 
12 July 2025 19:58:20 +0000 (0:00:00.258) 0:00:07.465 ********* 2025-07-12 19:58:24.461848 | orchestrator | changed: [testbed-manager] 2025-07-12 19:58:24.461857 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:58:24.461862 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:58:24.461867 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:58:24.461872 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:58:24.461876 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:58:24.461881 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:58:24.461900 | orchestrator | 2025-07-12 19:58:24.461906 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-07-12 19:58:24.461910 | orchestrator | Saturday 12 July 2025 19:58:22 +0000 (0:00:01.987) 0:00:09.452 ********* 2025-07-12 19:58:24.461915 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:58:24.461921 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:58:24.461928 | orchestrator | 2025-07-12 19:58:24.461959 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-07-12 19:58:24.461965 | orchestrator | Saturday 12 July 2025 19:58:22 +0000 (0:00:00.246) 0:00:09.699 ********* 2025-07-12 19:58:24.461970 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:58:24.461975 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:58:24.461980 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:58:24.461985 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:58:24.461990 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:58:24.461994 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:58:24.461999 | orchestrator | 2025-07-12 19:58:24.462004 | orchestrator | TASK [osism.commons.proxy : Set system 
wide settings in environment file] ****** 2025-07-12 19:58:24.462009 | orchestrator | Saturday 12 July 2025 19:58:23 +0000 (0:00:00.988) 0:00:10.688 ********* 2025-07-12 19:58:24.462038 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:58:24.462045 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-07-12 19:58:24.462050 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To 2025-07-12 19:58:24.462055 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2025-07-12 19:58:24.462060 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:58:24.462065 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:58:24.462070 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:58:24.462075 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:58:24.462080 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:58:24.462084 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:58:24.462089 | orchestrator | 2025-07-12 19:58:24.462094 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-07-12 19:58:24.462099 | orchestrator | Saturday 12 July 2025 19:58:24 +0000 (0:00:00.532) 0:00:11.220 ********* 2025-07-12 19:58:24.462104 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:58:24.462109 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:58:24.462113 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:58:24.462118 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:58:24.462122 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:58:24.462126 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:58:24.462136 | orchestrator | ok: [testbed-manager] 2025-07-12 19:58:35.703305 | orchestrator | 2025-07-12 19:58:35.703416 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-07-12 19:58:35.703484 | 
orchestrator | Saturday 12 July 2025 19:58:24 +0000 (0:00:00.437) 0:00:11.658 ********* 2025-07-12 19:58:35.703499 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:58:35.703512 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:58:35.703523 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:58:35.703534 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:58:35.703545 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:58:35.703556 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:58:35.703567 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:58:35.703577 | orchestrator | 2025-07-12 19:58:35.703589 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-07-12 19:58:35.703600 | orchestrator | Saturday 12 July 2025 19:58:24 +0000 (0:00:00.205) 0:00:11.863 ********* 2025-07-12 19:58:35.703614 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:58:35.703667 | orchestrator | 2025-07-12 19:58:35.703679 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-07-12 19:58:35.703690 | orchestrator | Saturday 12 July 2025 19:58:24 +0000 (0:00:00.327) 0:00:12.191 ********* 2025-07-12 19:58:35.703702 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:58:35.703713 | orchestrator | 2025-07-12 19:58:35.703724 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-07-12 19:58:35.703734 | orchestrator | Saturday 12 July 2025 19:58:25 +0000 (0:00:00.324) 
0:00:12.516 ********* 2025-07-12 19:58:35.703745 | orchestrator | ok: [testbed-manager] 2025-07-12 19:58:35.703779 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:35.703790 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:58:35.703815 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:58:35.703845 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:58:35.703857 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:35.703869 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:35.703881 | orchestrator | 2025-07-12 19:58:35.703894 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-07-12 19:58:35.703934 | orchestrator | Saturday 12 July 2025 19:58:26 +0000 (0:00:01.287) 0:00:13.803 ********* 2025-07-12 19:58:35.703946 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:58:35.703959 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:58:35.703971 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:58:35.703982 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:58:35.703994 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:58:35.704006 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:58:35.704054 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:58:35.704067 | orchestrator | 2025-07-12 19:58:35.704079 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-07-12 19:58:35.704091 | orchestrator | Saturday 12 July 2025 19:58:26 +0000 (0:00:00.191) 0:00:13.994 ********* 2025-07-12 19:58:35.704104 | orchestrator | ok: [testbed-manager] 2025-07-12 19:58:35.704116 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:35.704128 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:35.704142 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:35.704161 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:58:35.704179 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:58:35.704194 | orchestrator | ok: 
[testbed-node-5] 2025-07-12 19:58:35.704211 | orchestrator | 2025-07-12 19:58:35.704229 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-07-12 19:58:35.704308 | orchestrator | Saturday 12 July 2025 19:58:27 +0000 (0:00:00.549) 0:00:14.544 ********* 2025-07-12 19:58:35.704388 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:58:35.704408 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:58:35.704420 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:58:35.704431 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:58:35.704442 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:58:35.704452 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:58:35.704463 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:58:35.704474 | orchestrator | 2025-07-12 19:58:35.704485 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-07-12 19:58:35.704497 | orchestrator | Saturday 12 July 2025 19:58:27 +0000 (0:00:00.212) 0:00:14.756 ********* 2025-07-12 19:58:35.704508 | orchestrator | ok: [testbed-manager] 2025-07-12 19:58:35.704519 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:58:35.704530 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:58:35.704540 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:58:35.704551 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:58:35.704574 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:58:35.704585 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:58:35.704596 | orchestrator | 2025-07-12 19:58:35.704607 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-07-12 19:58:35.704617 | orchestrator | Saturday 12 July 2025 19:58:28 +0000 (0:00:00.556) 0:00:15.313 ********* 2025-07-12 19:58:35.704628 | orchestrator | ok: [testbed-manager] 2025-07-12 19:58:35.704639 | orchestrator | changed: 
[testbed-node-0] 2025-07-12 19:58:35.704649 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:58:35.704660 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:58:35.704670 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:58:35.704681 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:58:35.704691 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:58:35.704702 | orchestrator | 2025-07-12 19:58:35.704713 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-07-12 19:58:35.704723 | orchestrator | Saturday 12 July 2025 19:58:29 +0000 (0:00:01.080) 0:00:16.393 ********* 2025-07-12 19:58:35.704734 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:35.704745 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:58:35.704755 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:35.704766 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:35.704777 | orchestrator | ok: [testbed-manager] 2025-07-12 19:58:35.704809 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:58:35.704821 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:58:35.704832 | orchestrator | 2025-07-12 19:58:35.704843 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-07-12 19:58:35.704854 | orchestrator | Saturday 12 July 2025 19:58:30 +0000 (0:00:01.166) 0:00:17.559 ********* 2025-07-12 19:58:35.704865 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:58:35.704876 | orchestrator | 2025-07-12 19:58:35.704887 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-07-12 19:58:35.704921 | orchestrator | Saturday 12 July 2025 19:58:30 +0000 (0:00:00.491) 0:00:18.051 ********* 2025-07-12 
19:58:35.704934 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:58:35.704951 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:58:35.704962 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:58:35.704973 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:58:35.704986 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:58:35.705005 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:58:35.705022 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:58:35.705039 | orchestrator | 2025-07-12 19:58:35.705066 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-07-12 19:58:35.705087 | orchestrator | Saturday 12 July 2025 19:58:32 +0000 (0:00:01.294) 0:00:19.345 ********* 2025-07-12 19:58:35.705106 | orchestrator | ok: [testbed-manager] 2025-07-12 19:58:35.705122 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:35.705132 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:35.705166 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:35.705178 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:58:35.705188 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:58:35.705199 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:58:35.705209 | orchestrator | 2025-07-12 19:58:35.705221 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-07-12 19:58:35.705232 | orchestrator | Saturday 12 July 2025 19:58:32 +0000 (0:00:00.250) 0:00:19.596 ********* 2025-07-12 19:58:35.705242 | orchestrator | ok: [testbed-manager] 2025-07-12 19:58:35.705253 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:35.705263 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:35.705274 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:35.705284 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:58:35.705295 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:58:35.705306 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:58:35.705327 | 
orchestrator | 2025-07-12 19:58:35.705338 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-07-12 19:58:35.705349 | orchestrator | Saturday 12 July 2025 19:58:32 +0000 (0:00:00.251) 0:00:19.847 ********* 2025-07-12 19:58:35.705361 | orchestrator | ok: [testbed-manager] 2025-07-12 19:58:35.705381 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:35.705401 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:35.705420 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:35.705437 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:58:35.705469 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:58:35.705489 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:58:35.705509 | orchestrator | 2025-07-12 19:58:35.705521 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-07-12 19:58:35.705531 | orchestrator | Saturday 12 July 2025 19:58:32 +0000 (0:00:00.239) 0:00:20.087 ********* 2025-07-12 19:58:35.705543 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:58:35.705556 | orchestrator | 2025-07-12 19:58:35.705567 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-07-12 19:58:35.705578 | orchestrator | Saturday 12 July 2025 19:58:33 +0000 (0:00:00.333) 0:00:20.421 ********* 2025-07-12 19:58:35.705588 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:58:35.705599 | orchestrator | ok: [testbed-manager] 2025-07-12 19:58:35.705609 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:35.705620 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:35.705630 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:58:35.705641 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:58:35.705651 | orchestrator | ok: 
[testbed-node-5] 2025-07-12 19:58:35.705662 | orchestrator | 2025-07-12 19:58:35.705673 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-07-12 19:58:35.705683 | orchestrator | Saturday 12 July 2025 19:58:33 +0000 (0:00:00.572) 0:00:20.993 ********* 2025-07-12 19:58:35.705694 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:58:35.705705 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:58:35.705738 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:58:35.705749 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:58:35.705760 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:58:35.705770 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:58:35.705781 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:58:35.705791 | orchestrator | 2025-07-12 19:58:35.705802 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-07-12 19:58:35.705813 | orchestrator | Saturday 12 July 2025 19:58:34 +0000 (0:00:00.245) 0:00:21.238 ********* 2025-07-12 19:58:35.705827 | orchestrator | ok: [testbed-manager] 2025-07-12 19:58:35.705845 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:58:35.705861 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:58:35.705879 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:58:35.705993 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:58:35.706098 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:58:35.706124 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:58:35.706142 | orchestrator | 2025-07-12 19:58:35.706160 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-07-12 19:58:35.706178 | orchestrator | Saturday 12 July 2025 19:58:35 +0000 (0:00:01.076) 0:00:22.314 ********* 2025-07-12 19:58:35.706196 | orchestrator | ok: [testbed-manager] 2025-07-12 19:58:35.706215 | orchestrator | ok: [testbed-node-0] 2025-07-12 
19:58:35.706233 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:58:35.706278 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:58:35.706320 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:58:35.706337 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:58:35.706354 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:58:35.706370 | orchestrator | 2025-07-12 19:58:35.706407 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-07-12 19:59:06.797586 | orchestrator | Saturday 12 July 2025 19:58:35 +0000 (0:00:00.584) 0:00:22.899 ********* 2025-07-12 19:59:06.797706 | orchestrator | ok: [testbed-manager] 2025-07-12 19:59:06.797724 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:59:06.797735 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:59:06.797747 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:59:06.797759 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:59:06.797786 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:59:06.797797 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:59:06.797809 | orchestrator | 2025-07-12 19:59:06.797820 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-07-12 19:59:06.797833 | orchestrator | Saturday 12 July 2025 19:58:36 +0000 (0:00:01.066) 0:00:23.965 ********* 2025-07-12 19:59:06.797844 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:59:06.797855 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:59:06.797865 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:59:06.797876 | orchestrator | changed: [testbed-manager] 2025-07-12 19:59:06.797903 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:59:06.797915 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:59:06.797974 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:59:06.797985 | orchestrator | 2025-07-12 19:59:06.797998 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] 
***** 2025-07-12 19:59:06.798009 | orchestrator | Saturday 12 July 2025 19:58:51 +0000 (0:00:15.003) 0:00:38.969 ********* 2025-07-12 19:59:06.798077 | orchestrator | ok: [testbed-manager] 2025-07-12 19:59:06.798089 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:59:06.798100 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:59:06.798112 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:59:06.798124 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:59:06.798136 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:59:06.798149 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:59:06.798161 | orchestrator | 2025-07-12 19:59:06.798173 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-07-12 19:59:06.798186 | orchestrator | Saturday 12 July 2025 19:58:52 +0000 (0:00:00.243) 0:00:39.212 ********* 2025-07-12 19:59:06.798198 | orchestrator | ok: [testbed-manager] 2025-07-12 19:59:06.798210 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:59:06.798222 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:59:06.798234 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:59:06.798246 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:59:06.798258 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:59:06.798270 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:59:06.798283 | orchestrator | 2025-07-12 19:59:06.798296 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-07-12 19:59:06.798309 | orchestrator | Saturday 12 July 2025 19:58:52 +0000 (0:00:00.251) 0:00:39.464 ********* 2025-07-12 19:59:06.798321 | orchestrator | ok: [testbed-manager] 2025-07-12 19:59:06.798333 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:59:06.798346 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:59:06.798358 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:59:06.798370 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:59:06.798382 | orchestrator | ok: [testbed-node-4] 
2025-07-12 19:59:06.798394 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:59:06.798405 | orchestrator | 2025-07-12 19:59:06.798418 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-07-12 19:59:06.798431 | orchestrator | Saturday 12 July 2025 19:58:52 +0000 (0:00:00.253) 0:00:39.717 ********* 2025-07-12 19:59:06.798447 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:59:06.798462 | orchestrator | 2025-07-12 19:59:06.798474 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-07-12 19:59:06.798485 | orchestrator | Saturday 12 July 2025 19:58:52 +0000 (0:00:00.349) 0:00:40.067 ********* 2025-07-12 19:59:06.798525 | orchestrator | ok: [testbed-manager] 2025-07-12 19:59:06.798545 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:59:06.798565 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:59:06.798583 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:59:06.798603 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:59:06.798624 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:59:06.798645 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:59:06.798664 | orchestrator | 2025-07-12 19:59:06.798676 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-07-12 19:59:06.798687 | orchestrator | Saturday 12 July 2025 19:58:54 +0000 (0:00:01.689) 0:00:41.756 ********* 2025-07-12 19:59:06.798697 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:59:06.798708 | orchestrator | changed: [testbed-manager] 2025-07-12 19:59:06.798719 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:59:06.798729 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:59:06.798740 | orchestrator | changed: 
[testbed-node-3] 2025-07-12 19:59:06.798751 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:59:06.798762 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:59:06.798773 | orchestrator | 2025-07-12 19:59:06.798784 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-07-12 19:59:06.798794 | orchestrator | Saturday 12 July 2025 19:58:55 +0000 (0:00:01.126) 0:00:42.883 ********* 2025-07-12 19:59:06.798805 | orchestrator | ok: [testbed-manager] 2025-07-12 19:59:06.798816 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:59:06.798827 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:59:06.798838 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:59:06.798848 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:59:06.798859 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:59:06.798869 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:59:06.798880 | orchestrator | 2025-07-12 19:59:06.798891 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-07-12 19:59:06.798901 | orchestrator | Saturday 12 July 2025 19:58:56 +0000 (0:00:00.850) 0:00:43.733 ********* 2025-07-12 19:59:06.798913 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:59:06.798964 | orchestrator | 2025-07-12 19:59:06.798976 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-07-12 19:59:06.798988 | orchestrator | Saturday 12 July 2025 19:58:56 +0000 (0:00:00.303) 0:00:44.037 ********* 2025-07-12 19:59:06.799020 | orchestrator | changed: [testbed-manager] 2025-07-12 19:59:06.799032 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:59:06.799043 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:59:06.799054 | orchestrator | 
changed: [testbed-node-4] 2025-07-12 19:59:06.799065 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:59:06.799076 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:59:06.799086 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:59:06.799097 | orchestrator | 2025-07-12 19:59:06.799108 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-07-12 19:59:06.799119 | orchestrator | Saturday 12 July 2025 19:58:57 +0000 (0:00:01.012) 0:00:45.050 ********* 2025-07-12 19:59:06.799130 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:59:06.799141 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:59:06.799152 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:59:06.799163 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:59:06.799173 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:59:06.799184 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:59:06.799196 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:59:06.799206 | orchestrator | 2025-07-12 19:59:06.799217 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-07-12 19:59:06.799228 | orchestrator | Saturday 12 July 2025 19:58:58 +0000 (0:00:00.309) 0:00:45.360 ********* 2025-07-12 19:59:06.799239 | orchestrator | ok: [testbed-manager] 2025-07-12 19:59:06.799261 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:59:06.799272 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:59:06.799283 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:59:06.799294 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:59:06.799304 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:59:06.799315 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:59:06.799326 | orchestrator | 2025-07-12 19:59:06.799337 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-07-12 19:59:06.799348 | orchestrator | Saturday 12 July 2025 
19:59:01 +0000 (0:00:03.620) 0:00:48.980 ********* 2025-07-12 19:59:06.799359 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:59:06.799370 | orchestrator | ok: [testbed-manager] 2025-07-12 19:59:06.799380 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:59:06.799391 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:59:06.799402 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:59:06.799412 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:59:06.799423 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:59:06.799434 | orchestrator | 2025-07-12 19:59:06.799445 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ****************** 2025-07-12 19:59:06.799456 | orchestrator | Saturday 12 July 2025 19:59:02 +0000 (0:00:00.875) 0:00:49.855 ********* 2025-07-12 19:59:06.799467 | orchestrator | ok: [testbed-manager] 2025-07-12 19:59:06.799478 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:59:06.799488 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:59:06.799499 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:59:06.799510 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:59:06.799520 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:59:06.799531 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:59:06.799542 | orchestrator | 2025-07-12 19:59:06.799553 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] ***** 2025-07-12 19:59:06.799564 | orchestrator | Saturday 12 July 2025 19:59:03 +0000 (0:00:00.908) 0:00:50.764 ********* 2025-07-12 19:59:06.799579 | orchestrator | ok: [testbed-manager] 2025-07-12 19:59:06.799612 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:59:06.799635 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:59:06.799655 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:59:06.799675 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:59:06.799689 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:59:06.799699 | orchestrator | ok: [testbed-node-5] 
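The "Forward syslog message to local fluentd daemon" task above typically templates a forwarding rule into `/etc/rsyslog.d/`. A minimal sketch of what such a rule can look like — the target address, port, and filename here are assumptions for illustration, not taken from this log:

```
# /etc/rsyslog.d/49-fluentd.conf -- hypothetical sketch
# Forward all messages to a fluentd syslog source on the local host.
*.* action(type="omfwd" target="127.0.0.1" port="5140" protocol="udp")
```

The subsequent "Include additional log server tasks" step is skipped on every host, consistent with no external log server being configured in this testbed run.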
2025-07-12 19:59:06.799710 | orchestrator | 2025-07-12 19:59:06.799721 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2025-07-12 19:59:06.799732 | orchestrator | Saturday 12 July 2025 19:59:03 +0000 (0:00:00.220) 0:00:50.984 ********* 2025-07-12 19:59:06.799743 | orchestrator | ok: [testbed-manager] 2025-07-12 19:59:06.799754 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:59:06.799764 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:59:06.799775 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:59:06.799785 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:59:06.799796 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:59:06.799806 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:59:06.799817 | orchestrator | 2025-07-12 19:59:06.799828 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2025-07-12 19:59:06.799839 | orchestrator | Saturday 12 July 2025 19:59:04 +0000 (0:00:00.233) 0:00:51.218 ********* 2025-07-12 19:59:06.799850 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 19:59:06.799861 | orchestrator | 2025-07-12 19:59:06.799872 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2025-07-12 19:59:06.799883 | orchestrator | Saturday 12 July 2025 19:59:04 +0000 (0:00:00.286) 0:00:51.505 ********* 2025-07-12 19:59:06.799893 | orchestrator | ok: [testbed-manager] 2025-07-12 19:59:06.799904 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:59:06.799915 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:59:06.799956 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:59:06.799975 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:59:06.799986 | orchestrator | ok: 
[testbed-node-3] 2025-07-12 19:59:06.799997 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:59:06.800007 | orchestrator | 2025-07-12 19:59:06.800018 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2025-07-12 19:59:06.800029 | orchestrator | Saturday 12 July 2025 19:59:05 +0000 (0:00:01.672) 0:00:53.177 ********* 2025-07-12 19:59:06.800040 | orchestrator | ok: [testbed-manager] 2025-07-12 19:59:06.800050 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:59:06.800061 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:59:06.800071 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:59:06.800082 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:59:06.800092 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:59:06.800103 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:59:06.800114 | orchestrator | 2025-07-12 19:59:06.800124 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2025-07-12 19:59:06.800135 | orchestrator | Saturday 12 July 2025 19:59:06 +0000 (0:00:00.584) 0:00:53.762 ********* 2025-07-12 19:59:06.800146 | orchestrator | ok: [testbed-manager] 2025-07-12 19:59:06.800157 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:59:06.800167 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:59:06.800185 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:59:41.047108 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:59:41.047207 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:59:41.047222 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:59:41.047234 | orchestrator | 2025-07-12 19:59:41.047247 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2025-07-12 19:59:41.047259 | orchestrator | Saturday 12 July 2025 19:59:06 +0000 (0:00:00.236) 0:00:53.999 ********* 2025-07-12 19:59:41.047270 | orchestrator | ok: [testbed-manager] 2025-07-12 19:59:41.047281 | orchestrator | ok: [testbed-node-2] 2025-07-12 
19:59:41.047292 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:59:41.047303 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:59:41.047314 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:59:41.047325 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:59:41.047335 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:59:41.047346 | orchestrator | 2025-07-12 19:59:41.047358 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2025-07-12 19:59:41.047382 | orchestrator | Saturday 12 July 2025 19:59:08 +0000 (0:00:01.220) 0:00:55.219 ********* 2025-07-12 19:59:41.047394 | orchestrator | changed: [testbed-manager] 2025-07-12 19:59:41.047406 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:59:41.047417 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:59:41.047428 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:59:41.047439 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:59:41.047450 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:59:41.047460 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:59:41.047471 | orchestrator | 2025-07-12 19:59:41.047483 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2025-07-12 19:59:41.047494 | orchestrator | Saturday 12 July 2025 19:59:09 +0000 (0:00:01.736) 0:00:56.956 ********* 2025-07-12 19:59:41.047505 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:59:41.047516 | orchestrator | ok: [testbed-manager] 2025-07-12 19:59:41.047527 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:59:41.047538 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:59:41.047548 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:59:41.047559 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:59:41.047570 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:59:41.047581 | orchestrator | 2025-07-12 19:59:41.047592 | orchestrator | TASK [osism.commons.packages : Download required packages] 
********************* 2025-07-12 19:59:41.047603 | orchestrator | Saturday 12 July 2025 19:59:12 +0000 (0:00:02.450) 0:00:59.407 ********* 2025-07-12 19:59:41.047614 | orchestrator | changed: [testbed-manager] 2025-07-12 19:59:41.047625 | orchestrator | changed: [testbed-node-2] 2025-07-12 19:59:41.047636 | orchestrator | changed: [testbed-node-0] 2025-07-12 19:59:41.047647 | orchestrator | changed: [testbed-node-1] 2025-07-12 19:59:41.047678 | orchestrator | changed: [testbed-node-5] 2025-07-12 19:59:41.047692 | orchestrator | changed: [testbed-node-4] 2025-07-12 19:59:41.047705 | orchestrator | changed: [testbed-node-3] 2025-07-12 19:59:41.047717 | orchestrator | 2025-07-12 19:59:41.047730 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2025-07-12 19:59:41.047742 | orchestrator | Saturday 12 July 2025 19:59:13 +0000 (0:00:01.520) 0:01:00.927 ********* 2025-07-12 19:59:41.047755 | orchestrator | ok: [testbed-manager] 2025-07-12 19:59:41.047767 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:59:41.047779 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:59:41.047790 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:59:41.047801 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:59:41.047812 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:59:41.047823 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:59:41.047833 | orchestrator | 2025-07-12 19:59:41.047844 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2025-07-12 19:59:41.047855 | orchestrator | Saturday 12 July 2025 19:59:15 +0000 (0:00:01.823) 0:01:02.751 ********* 2025-07-12 19:59:41.047866 | orchestrator | ok: [testbed-manager] 2025-07-12 19:59:41.047877 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:59:41.047888 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:59:41.047899 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:59:41.047910 | orchestrator | ok: [testbed-node-3] 2025-07-12 
19:59:41.047920 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:59:41.047953 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:59:41.047965 | orchestrator | 2025-07-12 19:59:41.047976 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2025-07-12 19:59:41.047988 | orchestrator | Saturday 12 July 2025 19:59:17 +0000 (0:00:01.659) 0:01:04.411 ********* 2025-07-12 19:59:41.047999 | orchestrator | ok: [testbed-node-2] 2025-07-12 19:59:41.048010 | orchestrator | ok: [testbed-node-0] 2025-07-12 19:59:41.048020 | orchestrator | ok: [testbed-node-4] 2025-07-12 19:59:41.048031 | orchestrator | ok: [testbed-node-1] 2025-07-12 19:59:41.048042 | orchestrator | ok: [testbed-node-3] 2025-07-12 19:59:41.048052 | orchestrator | ok: [testbed-node-5] 2025-07-12 19:59:41.048063 | orchestrator | changed: [testbed-manager] 2025-07-12 19:59:41.048074 | orchestrator | 2025-07-12 19:59:41.048085 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2025-07-12 19:59:41.048096 | orchestrator | Saturday 12 July 2025 19:59:39 +0000 (0:00:21.806) 0:01:26.217 ********* 2025-07-12 19:59:41.048116 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2025-07-12 19:59:41.048156 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 
'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2025-07-12 19:59:41.048174 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2025-07-12 19:59:41.048192 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2025-07-12 19:59:41.048211 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]}) 2025-07-12 19:59:41.048223 | orchestrator | 2025-07-12 19:59:41.048234 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2025-07-12 19:59:41.048245 | orchestrator | Saturday 12 July 2025 19:59:39 +0000 (0:00:00.282) 0:01:26.500 ********* 2025-07-12 19:59:41.048256 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-12 19:59:41.048267 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:59:41.048278 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'vm.max_map_count', 'value': 262144})  2025-07-12 19:59:41.048289 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-12 19:59:41.048300 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:59:41.048311 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:59:41.048322 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2025-07-12 19:59:41.048333 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:59:41.048344 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-12 19:59:41.048355 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-12 19:59:41.048365 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-12 19:59:41.048376 | orchestrator | 2025-07-12 19:59:41.048387 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2025-07-12 19:59:41.048398 | orchestrator | Saturday 12 July 2025 19:59:40 +0000 (0:00:01.595) 0:01:28.096 ********* 2025-07-12 19:59:41.048408 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-12 19:59:41.048420 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-12 19:59:41.048431 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-12 19:59:41.048442 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-12 19:59:41.048453 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-12 19:59:41.048464 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-12 
19:59:41.048475 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-12 19:59:41.048486 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-12 19:59:41.048496 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-12 19:59:41.048507 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-12 19:59:41.048518 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:59:41.048529 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-12 19:59:41.048540 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-12 19:59:41.048551 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-12 19:59:41.048562 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-12 19:59:41.048580 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-12 19:59:41.048591 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-12 19:59:41.048602 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-12 19:59:41.048613 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-12 19:59:41.048631 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-12 19:59:47.203265 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-12 19:59:47.203336 | orchestrator | skipping: [testbed-node-3] => 
(item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-12 19:59:47.203347 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-12 19:59:47.203355 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-12 19:59:47.203362 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-12 19:59:47.203369 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-12 19:59:47.203376 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-12 19:59:47.203384 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-12 19:59:47.203391 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-12 19:59:47.203398 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-12 19:59:47.203406 | orchestrator | skipping: [testbed-node-3] 2025-07-12 19:59:47.203414 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-12 19:59:47.203421 | orchestrator | skipping: [testbed-node-4] 2025-07-12 19:59:47.203428 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2025-07-12 19:59:47.203436 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2025-07-12 19:59:47.203443 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2025-07-12 19:59:47.203450 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2025-07-12 19:59:47.203457 | orchestrator | skipping: [testbed-node-5] => 
(item={'name': 'net.core.rmem_max', 'value': 16777216})  2025-07-12 19:59:47.203465 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2025-07-12 19:59:47.203472 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2025-07-12 19:59:47.203479 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2025-07-12 19:59:47.203487 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2025-07-12 19:59:47.203494 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2025-07-12 19:59:47.203501 | orchestrator | skipping: [testbed-node-5] 2025-07-12 19:59:47.203508 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-07-12 19:59:47.203515 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-07-12 19:59:47.203522 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2025-07-12 19:59:47.203529 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-07-12 19:59:47.203551 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-07-12 19:59:47.203558 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2025-07-12 19:59:47.203566 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-07-12 19:59:47.203592 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-07-12 19:59:47.203600 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2025-07-12 
19:59:47.203607 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-07-12 19:59:47.203614 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-07-12 19:59:47.203621 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2025-07-12 19:59:47.203628 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-07-12 19:59:47.203635 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-07-12 19:59:47.203642 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2025-07-12 19:59:47.203649 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-07-12 19:59:47.203656 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-07-12 19:59:47.203667 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2025-07-12 19:59:47.203678 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-07-12 19:59:47.203700 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-07-12 19:59:47.203708 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2025-07-12 19:59:47.203715 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-07-12 19:59:47.203722 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-07-12 19:59:47.203729 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2025-07-12 19:59:47.203736 | orchestrator | changed: [testbed-node-2] => (item={'name': 
'net.ipv4.tcp_syncookies', 'value': 0}) 2025-07-12 19:59:47.203747 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-07-12 19:59:47.203755 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2025-07-12 19:59:47.203762 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-07-12 19:59:47.203769 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-07-12 19:59:47.203776 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2025-07-12 19:59:47.203783 | orchestrator | 2025-07-12 19:59:47.203795 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2025-07-12 19:59:47.203805 | orchestrator | Saturday 12 July 2025 19:59:44 +0000 (0:00:03.546) 0:01:31.642 ********* 2025-07-12 19:59:47.203812 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-12 19:59:47.203819 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-12 19:59:47.203826 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-12 19:59:47.203833 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-12 19:59:47.203840 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-12 19:59:47.203852 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-12 19:59:47.203859 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2025-07-12 19:59:47.203866 | orchestrator | 2025-07-12 19:59:47.203873 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2025-07-12 
19:59:47.203880 | orchestrator | Saturday 12 July 2025 19:59:45 +0000 (0:00:01.492) 0:01:33.135 ********* 2025-07-12 19:59:47.203887 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-07-12 19:59:47.203895 | orchestrator | skipping: [testbed-manager] 2025-07-12 19:59:47.203902 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-07-12 19:59:47.203912 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-07-12 19:59:47.203920 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:59:47.203927 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:59:47.203949 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2025-07-12 19:59:47.203956 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:59:47.203963 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-07-12 19:59:47.203970 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-07-12 19:59:47.203978 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2025-07-12 19:59:47.203985 | orchestrator | 2025-07-12 19:59:47.203992 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] **************** 2025-07-12 19:59:47.203999 | orchestrator | Saturday 12 July 2025 19:59:46 +0000 (0:00:00.566) 0:01:33.701 ********* 2025-07-12 19:59:47.204006 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-07-12 19:59:47.204013 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-07-12 19:59:47.204021 | orchestrator | 
skipping: [testbed-manager] 2025-07-12 19:59:47.204028 | orchestrator | skipping: [testbed-node-0] 2025-07-12 19:59:47.204035 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-07-12 19:59:47.204042 | orchestrator | skipping: [testbed-node-1] 2025-07-12 19:59:47.204049 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})  2025-07-12 19:59:47.204056 | orchestrator | skipping: [testbed-node-2] 2025-07-12 19:59:47.204063 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-07-12 19:59:47.204071 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-07-12 19:59:47.204078 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024}) 2025-07-12 19:59:47.204085 | orchestrator | 2025-07-12 19:59:47.204092 | orchestrator | TASK [osism.commons.limits : Include limits tasks] ***************************** 2025-07-12 19:59:47.204099 | orchestrator | Saturday 12 July 2025 19:59:47 +0000 (0:00:00.639) 0:01:34.340 ********* 2025-07-12 19:59:47.204111 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:00:08.030142 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:00:08.030234 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:00:08.030249 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:00:08.030260 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:00:08.030270 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:00:08.030280 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:00:08.030290 | orchestrator | 2025-07-12 20:00:08.030302 | orchestrator | TASK [osism.commons.services : Populate service facts] ************************* 2025-07-12 20:00:08.030313 | orchestrator | Saturday 12 July 2025 19:59:47 +0000 (0:00:00.249) 0:01:34.590 ********* 
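The grouped sysctl output above (elasticsearch, rabbitmq, generic, compute, k3s_node) shows each parameter list being applied only on hosts in the matching group: the manager and control nodes skip the compute and k3s_node items, while testbed-node-3/4/5 apply them. A sketch of how such a loop can be expressed with `ansible.posix.sysctl` — the variable names below are assumptions, not the role's actual code:

```yaml
# Hypothetical sketch of one per-group sysctl task.
- name: Set sysctl parameters
  ansible.posix.sysctl:
    name: "{{ item.name }}"    # e.g. net.netfilter.nf_conntrack_max
    value: "{{ item.value }}"  # e.g. 1048576
    sysctl_set: true
    state: present
  loop: "{{ sysctl_parameters }}"
  # Hosts outside the group are reported as "skipping", as seen above.
  when: sysctl_group in group_names
```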
2025-07-12 20:00:08.030343 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:08.030354 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:08.030364 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:08.030373 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:08.030383 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:08.030392 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:08.030413 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:08.030423 | orchestrator |
2025-07-12 20:00:08.030433 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-07-12 20:00:08.030443 | orchestrator | Saturday 12 July 2025 19:59:53 +0000 (0:00:05.689) 0:01:40.280 *********
2025-07-12 20:00:08.030452 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-07-12 20:00:08.030467 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-07-12 20:00:08.030483 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:00:08.030499 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-07-12 20:00:08.030514 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:00:08.030532 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-07-12 20:00:08.030549 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:00:08.030566 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-07-12 20:00:08.030577 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:00:08.030587 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-07-12 20:00:08.030599 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:00:08.030610 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:00:08.030621 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-07-12 20:00:08.030638 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:00:08.030655 | orchestrator |
2025-07-12 20:00:08.030672 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-07-12 20:00:08.030688 | orchestrator | Saturday 12 July 2025 19:59:53 +0000 (0:00:00.339) 0:01:40.620 *********
2025-07-12 20:00:08.030705 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-07-12 20:00:08.030722 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-07-12 20:00:08.030741 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-07-12 20:00:08.030758 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-07-12 20:00:08.030774 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-07-12 20:00:08.030792 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-07-12 20:00:08.030810 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-07-12 20:00:08.030827 | orchestrator |
2025-07-12 20:00:08.030843 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-07-12 20:00:08.030855 | orchestrator | Saturday 12 July 2025 19:59:54 +0000 (0:00:00.953) 0:01:41.573 *********
2025-07-12 20:00:08.030868 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:00:08.030882 | orchestrator |
2025-07-12 20:00:08.030893 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-07-12 20:00:08.030904 | orchestrator | Saturday 12 July 2025 19:59:54 +0000 (0:00:00.356) 0:01:41.930 *********
2025-07-12 20:00:08.030916 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:08.030927 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:08.030978 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:08.030998 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:08.031015 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:08.031031 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:08.031041 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:08.031051 | orchestrator |
2025-07-12 20:00:08.031061 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-07-12 20:00:08.031070 | orchestrator | Saturday 12 July 2025 19:59:56 +0000 (0:00:02.230) 0:01:44.160 *********
2025-07-12 20:00:08.031080 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:08.031100 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:08.031110 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:08.031119 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:08.031129 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:08.031138 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:08.031147 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:08.031157 | orchestrator |
2025-07-12 20:00:08.031167 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-07-12 20:00:08.031177 | orchestrator | Saturday 12 July 2025 19:59:57 +0000 (0:00:00.555) 0:01:44.715 *********
2025-07-12 20:00:08.031187 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:08.031196 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:08.031206 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:08.031215 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:08.031225 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:08.031234 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:08.031243 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:08.031253 | orchestrator |
2025-07-12 20:00:08.031263 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-07-12 20:00:08.031272 | orchestrator | Saturday 12 July 2025 19:59:58 +0000 (0:00:00.537) 0:01:45.284 *********
2025-07-12 20:00:08.031282 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:08.031291 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:08.031301 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:08.031310 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:08.031320 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:08.031329 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:08.031339 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:08.031348 | orchestrator |
2025-07-12 20:00:08.031358 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-07-12 20:00:08.031368 | orchestrator | Saturday 12 July 2025 19:59:58 +0000 (0:00:00.537) 0:01:45.822 *********
2025-07-12 20:00:08.031378 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:00:08.031405 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:00:08.031416 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:00:08.031426 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:00:08.031436 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:00:08.031445 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:00:08.031455 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:00:08.031464 | orchestrator |
2025-07-12 20:00:08.031474 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-07-12 20:00:08.031484 | orchestrator | Saturday 12 July 2025 19:59:58 +0000 (0:00:00.215) 0:01:46.037 *********
2025-07-12 20:00:08.031493 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:08.031503 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:08.031512 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:08.031522 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:08.031531 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:08.031541 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:08.031557 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:08.031567 | orchestrator |
2025-07-12 20:00:08.031577 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-07-12 20:00:08.031586 | orchestrator | Saturday 12 July 2025 19:59:59 +0000 (0:00:01.037) 0:01:47.074 *********
2025-07-12 20:00:08.031596 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:08.031606 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:08.031615 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:08.031624 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:08.031634 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:08.031643 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:08.031653 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:08.031662 | orchestrator |
2025-07-12 20:00:08.031672 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-07-12 20:00:08.031682 | orchestrator | Saturday 12 July 2025 20:00:01 +0000 (0:00:01.718) 0:01:48.793 *********
2025-07-12 20:00:08.031691 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:08.031707 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:08.031717 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:08.031726 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:08.031736 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:08.031745 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:08.031755 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:08.031764 | orchestrator |
2025-07-12 20:00:08.031774 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-07-12 20:00:08.031784 | orchestrator | Saturday 12 July 2025 20:00:02 +0000 (0:00:01.051) 0:01:49.845 *********
2025-07-12 20:00:08.031793 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:00:08.031803 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:00:08.031813 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:00:08.031822 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:00:08.031832 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:00:08.031841 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:00:08.031851 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:00:08.031860 | orchestrator |
2025-07-12 20:00:08.031870 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-07-12 20:00:08.031879 | orchestrator | Saturday 12 July 2025 20:00:02 +0000 (0:00:00.243) 0:01:50.088 *********
2025-07-12 20:00:08.031889 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:08.031899 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:08.031908 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:08.031918 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:08.031927 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:08.031955 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:08.031965 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:08.031974 | orchestrator |
2025-07-12 20:00:08.031984 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-07-12 20:00:08.031994 | orchestrator | Saturday 12 July 2025 20:00:03 +0000 (0:00:00.686) 0:01:50.774 *********
2025-07-12 20:00:08.032004 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:00:08.032014 | orchestrator |
2025-07-12 20:00:08.032024 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-07-12 20:00:08.032033 | orchestrator | Saturday 12 July 2025 20:00:03 +0000 (0:00:00.314) 0:01:51.089 *********
2025-07-12 20:00:08.032043 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:08.032053 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:08.032062 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:08.032072 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:08.032081 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:08.032091 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:08.032100 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:08.032110 | orchestrator |
2025-07-12 20:00:08.032119 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-07-12 20:00:08.032129 | orchestrator | Saturday 12 July 2025 20:00:05 +0000 (0:00:01.651) 0:01:52.741 *********
2025-07-12 20:00:08.032138 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:08.032148 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:08.032158 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:08.032167 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:08.032177 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:08.032186 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:08.032196 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:08.032205 | orchestrator |
2025-07-12 20:00:08.032215 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-07-12 20:00:08.032224 | orchestrator | Saturday 12 July 2025 20:00:06 +0000 (0:00:01.202) 0:01:53.943 *********
2025-07-12 20:00:08.032234 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:08.032244 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:08.032253 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:08.032270 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:08.032279 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:08.032289 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:08.032298 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:08.032308 | orchestrator |
2025-07-12 20:00:08.032318 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-07-12 20:00:08.032328 | orchestrator | Saturday 12 July 2025 20:00:07 +0000 (0:00:00.919) 0:01:54.863 *********
2025-07-12 20:00:08.032345 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:00:42.105986 | orchestrator |
2025-07-12 20:00:42.106137 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-07-12 20:00:42.106155 | orchestrator | Saturday 12 July 2025 20:00:08 +0000 (0:00:00.367) 0:01:55.230 *********
2025-07-12 20:00:42.106168 | orchestrator | changed: [testbed-manager]
2025-07-12 20:00:42.106180 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:00:42.106191 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:00:42.106202 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:00:42.106212 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:00:42.106223 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:00:42.106234 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:00:42.106245 | orchestrator |
2025-07-12 20:00:42.106256 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-07-12 20:00:42.106268 | orchestrator | Saturday 12 July 2025 20:00:18 +0000 (0:00:10.036) 0:02:05.266 *********
2025-07-12 20:00:42.106279 | orchestrator | changed: [testbed-manager]
2025-07-12 20:00:42.106290 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:00:42.106300 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:00:42.106311 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:00:42.106322 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:00:42.106333 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:00:42.106343 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:00:42.106354 | orchestrator |
2025-07-12 20:00:42.106366 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-07-12 20:00:42.106377 | orchestrator | Saturday 12 July 2025 20:00:18 +0000 (0:00:00.542) 0:02:05.808 *********
2025-07-12 20:00:42.106388 | orchestrator | changed: [testbed-manager]
2025-07-12 20:00:42.106399 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:00:42.106410 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:00:42.106420 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:00:42.106431 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:00:42.106442 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:00:42.106452 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:00:42.106463 | orchestrator |
2025-07-12 20:00:42.106490 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-07-12 20:00:42.106502 | orchestrator | Saturday 12 July 2025 20:00:19 +0000 (0:00:01.061) 0:02:06.870 *********
2025-07-12 20:00:42.106516 | orchestrator | changed: [testbed-manager]
2025-07-12 20:00:42.106529 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:00:42.106541 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:00:42.106553 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:00:42.106565 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:00:42.106577 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:00:42.106588 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:00:42.106600 | orchestrator |
2025-07-12 20:00:42.106612 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-07-12 20:00:42.106625 | orchestrator | Saturday 12 July 2025 20:00:20 +0000 (0:00:01.078) 0:02:07.948 *********
2025-07-12 20:00:42.106637 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:42.106650 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:42.106662 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:42.106674 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:42.106707 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:42.106719 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:42.106732 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:42.106744 | orchestrator |
2025-07-12 20:00:42.106756 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-07-12 20:00:42.106769 | orchestrator | Saturday 12 July 2025 20:00:21 +0000 (0:00:00.280) 0:02:08.228 *********
2025-07-12 20:00:42.106782 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:42.106794 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:42.106806 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:42.106819 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:42.106831 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:42.106843 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:42.106855 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:42.106868 | orchestrator |
2025-07-12 20:00:42.106880 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-07-12 20:00:42.106892 | orchestrator | Saturday 12 July 2025 20:00:21 +0000 (0:00:00.239) 0:02:08.468 *********
2025-07-12 20:00:42.106903 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:42.106914 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:42.106924 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:42.106935 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:42.106963 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:42.106974 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:42.106985 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:42.106995 | orchestrator |
2025-07-12 20:00:42.107007 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-07-12 20:00:42.107017 | orchestrator | Saturday 12 July 2025 20:00:21 +0000 (0:00:00.270) 0:02:08.738 *********
2025-07-12 20:00:42.107029 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:42.107040 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:42.107050 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:42.107061 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:42.107072 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:42.107082 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:42.107093 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:42.107104 | orchestrator |
2025-07-12 20:00:42.107115 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-07-12 20:00:42.107126 | orchestrator | Saturday 12 July 2025 20:00:27 +0000 (0:00:05.696) 0:02:14.435 *********
2025-07-12 20:00:42.107138 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:00:42.107151 | orchestrator |
2025-07-12 20:00:42.107162 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-07-12 20:00:42.107173 | orchestrator | Saturday 12 July 2025 20:00:27 +0000 (0:00:00.331) 0:02:14.767 *********
2025-07-12 20:00:42.107184 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-07-12 20:00:42.107195 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-07-12 20:00:42.107206 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:00:42.107217 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-07-12 20:00:42.107245 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-07-12 20:00:42.107257 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:00:42.107268 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-07-12 20:00:42.107279 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-07-12 20:00:42.107289 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-07-12 20:00:42.107300 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-07-12 20:00:42.107311 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:00:42.107321 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-07-12 20:00:42.107339 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:00:42.107356 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-07-12 20:00:42.107367 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-07-12 20:00:42.107378 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-07-12 20:00:42.107388 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:00:42.107399 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:00:42.107410 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-07-12 20:00:42.107421 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-07-12 20:00:42.107431 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:00:42.107442 | orchestrator |
2025-07-12 20:00:42.107453 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-07-12 20:00:42.107464 | orchestrator | Saturday 12 July 2025 20:00:27 +0000 (0:00:00.289) 0:02:15.056 *********
2025-07-12 20:00:42.107475 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:00:42.107486 | orchestrator |
2025-07-12 20:00:42.107497 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-07-12 20:00:42.107508 | orchestrator | Saturday 12 July 2025 20:00:28 +0000 (0:00:00.353) 0:02:15.409 *********
2025-07-12 20:00:42.107519 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-07-12 20:00:42.107529 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:00:42.107540 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-07-12 20:00:42.107551 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-07-12 20:00:42.107561 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:00:42.107572 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-07-12 20:00:42.107583 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:00:42.107594 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:00:42.107604 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-07-12 20:00:42.107615 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-07-12 20:00:42.107626 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:00:42.107636 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:00:42.107647 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-07-12 20:00:42.107658 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:00:42.107669 | orchestrator |
2025-07-12 20:00:42.107679 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-07-12 20:00:42.107690 | orchestrator | Saturday 12 July 2025 20:00:28 +0000 (0:00:00.265) 0:02:15.675 *********
2025-07-12 20:00:42.107701 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:00:42.107712 | orchestrator |
2025-07-12 20:00:42.107723 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-07-12 20:00:42.107733 | orchestrator | Saturday 12 July 2025 20:00:28 +0000 (0:00:00.359) 0:02:16.034 *********
2025-07-12 20:00:42.107744 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:42.107754 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:42.107765 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:42.107776 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:42.107786 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:42.107797 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:42.107807 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:42.107818 | orchestrator |
2025-07-12 20:00:42.107829 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-07-12 20:00:42.107840 | orchestrator | Saturday 12 July 2025 20:00:30 +0000 (0:00:01.306) 0:02:17.341 *********
2025-07-12 20:00:42.107856 | orchestrator | changed: [testbed-manager]
2025-07-12 20:00:42.107867 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:00:42.107878 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:00:42.107889 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:00:42.107899 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:00:42.107910 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:00:42.107921 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:00:42.107931 | orchestrator |
2025-07-12 20:00:42.107958 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-07-12 20:00:42.107969 | orchestrator | Saturday 12 July 2025 20:00:39 +0000 (0:00:09.035) 0:02:26.376 *********
2025-07-12 20:00:42.107980 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:42.107991 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:42.108002 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:42.108013 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:42.108024 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:42.108035 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:42.108046 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:42.108057 | orchestrator |
2025-07-12 20:00:42.108068 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-07-12 20:00:42.108078 | orchestrator | Saturday 12 July 2025 20:00:40 +0000 (0:00:01.253) 0:02:27.630 *********
2025-07-12 20:00:42.108089 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:42.108100 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:42.108117 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:54.281006 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:54.281109 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:54.281124 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:54.281136 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:54.281147 | orchestrator |
2025-07-12 20:00:54.281159 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-07-12 20:00:54.281172 | orchestrator | Saturday 12 July 2025 20:00:42 +0000 (0:00:01.676) 0:02:29.306 *********
2025-07-12 20:00:54.281184 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:54.281195 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:54.281206 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:54.281217 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:54.281228 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:54.281239 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:54.281264 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:54.281275 | orchestrator |
2025-07-12 20:00:54.281287 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-07-12 20:00:54.281298 | orchestrator | Saturday 12 July 2025 20:00:44 +0000 (0:00:02.175) 0:02:31.482 *********
2025-07-12 20:00:54.281309 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:00:54.281322 | orchestrator |
2025-07-12 20:00:54.281334 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-07-12 20:00:54.281346 | orchestrator | Saturday 12 July 2025 20:00:44 +0000 (0:00:00.377) 0:02:31.859 *********
2025-07-12 20:00:54.281357 | orchestrator | changed: [testbed-manager]
2025-07-12 20:00:54.281369 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:00:54.281380 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:00:54.281391 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:00:54.281402 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:00:54.281413 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:00:54.281431 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:00:54.281449 | orchestrator |
2025-07-12 20:00:54.281467 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-07-12 20:00:54.281486 | orchestrator | Saturday 12 July 2025 20:00:45 +0000 (0:00:00.641) 0:02:32.501 *********
2025-07-12 20:00:54.281504 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:54.281523 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:54.281571 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:54.281591 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:54.281608 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:54.281627 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:54.281646 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:54.281665 | orchestrator |
2025-07-12 20:00:54.281686 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-07-12 20:00:54.281706 | orchestrator | Saturday 12 July 2025 20:00:46 +0000 (0:00:01.671) 0:02:34.173 *********
2025-07-12 20:00:54.281726 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:00:54.281740 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:00:54.281753 | orchestrator | changed: [testbed-manager]
2025-07-12 20:00:54.281765 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:00:54.281777 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:00:54.281790 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:00:54.281801 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:00:54.281813 | orchestrator |
2025-07-12 20:00:54.281826 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-07-12 20:00:54.281839 | orchestrator | Saturday 12 July 2025 20:00:47 +0000 (0:00:00.824) 0:02:34.997 *********
2025-07-12 20:00:54.281851 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:00:54.281862 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:00:54.281873 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:00:54.281883 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:00:54.281894 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:00:54.281904 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:00:54.281915 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:00:54.281926 | orchestrator |
2025-07-12 20:00:54.281959 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-07-12 20:00:54.281972 | orchestrator | Saturday 12 July 2025 20:00:48 +0000 (0:00:00.275) 0:02:35.272 *********
2025-07-12 20:00:54.281983 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:00:54.281994 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:00:54.282005 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:00:54.282062 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:00:54.282074 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:00:54.282086 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:00:54.282096 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:00:54.282107 | orchestrator |
2025-07-12 20:00:54.282118 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-07-12 20:00:54.282129 | orchestrator | Saturday 12 July 2025 20:00:48 +0000 (0:00:00.393) 0:02:35.665 *********
2025-07-12 20:00:54.282141 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:54.282152 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:54.282163 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:54.282174 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:54.282185 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:54.282195 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:54.282206 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:54.282217 | orchestrator |
2025-07-12 20:00:54.282228 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-07-12 20:00:54.282239 | orchestrator | Saturday 12 July 2025 20:00:48 +0000 (0:00:00.271) 0:02:35.937 *********
2025-07-12 20:00:54.282250 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:00:54.282261 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:00:54.282271 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:00:54.282282 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:00:54.282293 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:00:54.282304 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:00:54.282314 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:00:54.282325 | orchestrator |
2025-07-12 20:00:54.282336 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-07-12 20:00:54.282348 | orchestrator | Saturday 12 July 2025 20:00:49 +0000 (0:00:00.280) 0:02:36.218 *********
2025-07-12 20:00:54.282369 | orchestrator | ok: [testbed-manager]
2025-07-12 20:00:54.282381 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:00:54.282392 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:00:54.282422 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:00:54.282434 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:00:54.282445 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:00:54.282455 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:00:54.282466 | orchestrator |
2025-07-12 20:00:54.282477 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-07-12 20:00:54.282488 | orchestrator | Saturday 12 July 2025 20:00:49 +0000 (0:00:00.307) 0:02:36.525 *********
2025-07-12 20:00:54.282499 | orchestrator | ok: [testbed-manager] =>
2025-07-12 20:00:54.282510 | orchestrator |   docker_version: 5:27.5.1
2025-07-12 20:00:54.282520 | orchestrator | ok: [testbed-node-0] =>
2025-07-12 20:00:54.282531 | orchestrator |   docker_version: 5:27.5.1
2025-07-12 20:00:54.282542 | orchestrator | ok: [testbed-node-1] =>
2025-07-12 20:00:54.282559 | orchestrator |   docker_version: 5:27.5.1
2025-07-12 20:00:54.282570 | orchestrator | ok: [testbed-node-2] =>
2025-07-12 20:00:54.282581 | orchestrator |   docker_version: 5:27.5.1
2025-07-12 20:00:54.282592 | orchestrator | ok: [testbed-node-3] =>
2025-07-12 20:00:54.282603 | orchestrator |   docker_version: 5:27.5.1
2025-07-12 20:00:54.282613 | orchestrator | ok: [testbed-node-4] =>
2025-07-12 20:00:54.282624 | orchestrator |   docker_version: 5:27.5.1
2025-07-12 20:00:54.282635 | orchestrator | ok: [testbed-node-5] =>
2025-07-12 20:00:54.282645 | orchestrator |   docker_version: 5:27.5.1
2025-07-12 20:00:54.282656 | orchestrator |
2025-07-12 20:00:54.282667 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-07-12 20:00:54.282678 | orchestrator | Saturday 12 July 2025 20:00:49 +0000 (0:00:00.246) 0:02:36.772 *********
2025-07-12 20:00:54.282688 | orchestrator | ok: [testbed-manager] =>
2025-07-12 20:00:54.282699 | orchestrator |   docker_cli_version: 5:27.5.1
2025-07-12 20:00:54.282710 | orchestrator | ok: [testbed-node-0] =>
2025-07-12 20:00:54.282721 | orchestrator |   docker_cli_version: 5:27.5.1
2025-07-12 20:00:54.282731 | orchestrator | ok: [testbed-node-1] =>
2025-07-12 20:00:54.282742 | orchestrator |   docker_cli_version: 5:27.5.1
2025-07-12 20:00:54.282752 | orchestrator | ok: [testbed-node-2] =>
2025-07-12 20:00:54.282763 | orchestrator |   docker_cli_version: 5:27.5.1
2025-07-12 20:00:54.282774 | orchestrator | ok: [testbed-node-3] =>
2025-07-12 20:00:54.282784 | orchestrator |   docker_cli_version: 5:27.5.1
2025-07-12 20:00:54.282795 | orchestrator | ok: [testbed-node-4] =>
2025-07-12 20:00:54.282805 | orchestrator |   docker_cli_version: 5:27.5.1
2025-07-12 20:00:54.282816 | orchestrator | ok: [testbed-node-5] =>
2025-07-12 20:00:54.282827 | orchestrator |   docker_cli_version: 5:27.5.1
2025-07-12 20:00:54.282837 | orchestrator |
2025-07-12 20:00:54.282848 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-07-12 20:00:54.282859 | orchestrator | Saturday 12 July 2025 20:00:49 +0000 (0:00:00.219) 0:02:36.991 *********
2025-07-12 20:00:54.282870 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:00:54.282880 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:00:54.282891 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:00:54.282902 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:00:54.282912 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:00:54.282923 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:00:54.282933 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:00:54.282971 | orchestrator |
2025-07-12 20:00:54.282991 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-07-12 20:00:54.283010 | orchestrator | Saturday 12 July 2025 20:00:50 +0000 (0:00:00.333) 0:02:37.325 *********
2025-07-12 20:00:54.283028 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:00:54.283040 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:00:54.283050 | orchestrator
| skipping: [testbed-node-1] 2025-07-12 20:00:54.283061 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:00:54.283071 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:00:54.283097 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:00:54.283113 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:00:54.283130 | orchestrator | 2025-07-12 20:00:54.283148 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-07-12 20:00:54.283166 | orchestrator | Saturday 12 July 2025 20:00:50 +0000 (0:00:00.239) 0:02:37.565 ********* 2025-07-12 20:00:54.283186 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:00:54.283206 | orchestrator | 2025-07-12 20:00:54.283224 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-07-12 20:00:54.283243 | orchestrator | Saturday 12 July 2025 20:00:50 +0000 (0:00:00.389) 0:02:37.954 ********* 2025-07-12 20:00:54.283261 | orchestrator | ok: [testbed-manager] 2025-07-12 20:00:54.283280 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:00:54.283299 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:00:54.283318 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:00:54.283337 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:00:54.283355 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:00:54.283375 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:00:54.283394 | orchestrator | 2025-07-12 20:00:54.283413 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-07-12 20:00:54.283433 | orchestrator | Saturday 12 July 2025 20:00:51 +0000 (0:00:00.836) 0:02:38.791 ********* 2025-07-12 20:00:54.283454 | orchestrator | ok: [testbed-manager] 2025-07-12 
20:00:54.283472 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:00:54.283491 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:00:54.283509 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:00:54.283527 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:00:54.283547 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:00:54.283566 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:00:54.283585 | orchestrator | 2025-07-12 20:00:54.283603 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-07-12 20:00:54.283623 | orchestrator | Saturday 12 July 2025 20:00:54 +0000 (0:00:02.546) 0:02:41.338 ********* 2025-07-12 20:00:54.283641 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-07-12 20:00:54.283660 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-07-12 20:00:54.283680 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-07-12 20:00:54.283699 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-07-12 20:00:54.283714 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-07-12 20:00:54.283740 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-07-12 20:01:18.707746 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:01:18.707877 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-07-12 20:01:18.707893 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-07-12 20:01:18.707906 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-07-12 20:01:18.707918 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:01:18.707929 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-07-12 20:01:18.707992 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-07-12 20:01:18.708006 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-07-12 20:01:18.708018 | orchestrator | 
skipping: [testbed-node-1] 2025-07-12 20:01:18.708029 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-07-12 20:01:18.708039 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-07-12 20:01:18.708050 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-07-12 20:01:18.708061 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:01:18.708073 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-07-12 20:01:18.708084 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-07-12 20:01:18.708118 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-07-12 20:01:18.708129 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:01:18.708140 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:01:18.708151 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-07-12 20:01:18.708161 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-07-12 20:01:18.708172 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-07-12 20:01:18.708183 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:01:18.708194 | orchestrator | 2025-07-12 20:01:18.708206 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-07-12 20:01:18.708218 | orchestrator | Saturday 12 July 2025 20:00:54 +0000 (0:00:00.664) 0:02:42.002 ********* 2025-07-12 20:01:18.708249 | orchestrator | ok: [testbed-manager] 2025-07-12 20:01:18.708261 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:01:18.708273 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:01:18.708285 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:01:18.708298 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:01:18.708310 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:01:18.708322 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:01:18.708335 | orchestrator | 2025-07-12 20:01:18.708367 | orchestrator | TASK 
[osism.services.docker : Add repository gpg key] ************************** 2025-07-12 20:01:18.708380 | orchestrator | Saturday 12 July 2025 20:00:56 +0000 (0:00:01.823) 0:02:43.826 ********* 2025-07-12 20:01:18.708393 | orchestrator | ok: [testbed-manager] 2025-07-12 20:01:18.708406 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:01:18.708418 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:01:18.708431 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:01:18.708444 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:01:18.708456 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:01:18.708469 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:01:18.708481 | orchestrator | 2025-07-12 20:01:18.708494 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-07-12 20:01:18.708506 | orchestrator | Saturday 12 July 2025 20:00:57 +0000 (0:00:01.016) 0:02:44.843 ********* 2025-07-12 20:01:18.708519 | orchestrator | ok: [testbed-manager] 2025-07-12 20:01:18.708531 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:01:18.708544 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:01:18.708556 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:01:18.708568 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:01:18.708580 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:01:18.708592 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:01:18.708605 | orchestrator | 2025-07-12 20:01:18.708618 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-07-12 20:01:18.708630 | orchestrator | Saturday 12 July 2025 20:00:58 +0000 (0:00:00.949) 0:02:45.792 ********* 2025-07-12 20:01:18.708641 | orchestrator | changed: [testbed-manager] 2025-07-12 20:01:18.708652 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:01:18.708663 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:01:18.708680 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:01:18.708699 | orchestrator 
| changed: [testbed-node-1] 2025-07-12 20:01:18.708716 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:01:18.708728 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:01:18.708738 | orchestrator | 2025-07-12 20:01:18.708750 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-07-12 20:01:18.708761 | orchestrator | Saturday 12 July 2025 20:01:01 +0000 (0:00:03.310) 0:02:49.102 ********* 2025-07-12 20:01:18.708772 | orchestrator | ok: [testbed-manager] 2025-07-12 20:01:18.708783 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:01:18.708793 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:01:18.708804 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:01:18.708815 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:01:18.708826 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:01:18.708837 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:01:18.708848 | orchestrator | 2025-07-12 20:01:18.708859 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-07-12 20:01:18.708878 | orchestrator | Saturday 12 July 2025 20:01:03 +0000 (0:00:01.516) 0:02:50.618 ********* 2025-07-12 20:01:18.708889 | orchestrator | ok: [testbed-manager] 2025-07-12 20:01:18.708901 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:01:18.708911 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:01:18.708922 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:01:18.708933 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:01:18.708967 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:01:18.708978 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:01:18.708988 | orchestrator | 2025-07-12 20:01:18.708999 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-07-12 20:01:18.709011 | orchestrator | Saturday 12 July 2025 20:01:04 +0000 (0:00:01.328) 0:02:51.946 ********* 2025-07-12 20:01:18.709021 | orchestrator | changed: 
[testbed-manager] 2025-07-12 20:01:18.709032 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:01:18.709043 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:01:18.709054 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:01:18.709065 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:01:18.709076 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:01:18.709087 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:01:18.709098 | orchestrator | 2025-07-12 20:01:18.709109 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-07-12 20:01:18.709139 | orchestrator | Saturday 12 July 2025 20:01:05 +0000 (0:00:00.944) 0:02:52.891 ********* 2025-07-12 20:01:18.709151 | orchestrator | ok: [testbed-manager] 2025-07-12 20:01:18.709162 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:01:18.709173 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:01:18.709183 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:01:18.709194 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:01:18.709205 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:01:18.709216 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:01:18.709226 | orchestrator | 2025-07-12 20:01:18.709237 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-07-12 20:01:18.709248 | orchestrator | Saturday 12 July 2025 20:01:07 +0000 (0:00:02.227) 0:02:55.118 ********* 2025-07-12 20:01:18.709259 | orchestrator | changed: [testbed-manager] 2025-07-12 20:01:18.709275 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:01:18.709287 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:01:18.709298 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:01:18.709308 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:01:18.709319 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:01:18.709330 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:01:18.709341 | orchestrator 
| 2025-07-12 20:01:18.709351 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-07-12 20:01:18.709362 | orchestrator | Saturday 12 July 2025 20:01:08 +0000 (0:00:00.998) 0:02:56.117 ********* 2025-07-12 20:01:18.709373 | orchestrator | ok: [testbed-manager] 2025-07-12 20:01:18.709384 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:01:18.709395 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:01:18.709405 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:01:18.709416 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:01:18.709427 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:01:18.709438 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:01:18.709448 | orchestrator | 2025-07-12 20:01:18.709459 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-07-12 20:01:18.709470 | orchestrator | Saturday 12 July 2025 20:01:11 +0000 (0:00:02.328) 0:02:58.445 ********* 2025-07-12 20:01:18.709481 | orchestrator | ok: [testbed-manager] 2025-07-12 20:01:18.709492 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:01:18.709503 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:01:18.709513 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:01:18.709524 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:01:18.709535 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:01:18.709545 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:01:18.709556 | orchestrator | 2025-07-12 20:01:18.709574 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-07-12 20:01:18.709585 | orchestrator | Saturday 12 July 2025 20:01:13 +0000 (0:00:02.049) 0:03:00.495 ********* 2025-07-12 20:01:18.709596 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-07-12 20:01:18.709607 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-07-12 20:01:18.709618 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 
2025-07-12 20:01:18.709629 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-07-12 20:01:18.709640 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-07-12 20:01:18.709651 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-07-12 20:01:18.709661 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-07-12 20:01:18.709672 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-07-12 20:01:18.709683 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-07-12 20:01:18.709694 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-07-12 20:01:18.709705 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-07-12 20:01:18.709715 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-07-12 20:01:18.709726 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-07-12 20:01:18.709737 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-07-12 20:01:18.709747 | orchestrator | 2025-07-12 20:01:18.709758 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-07-12 20:01:18.709769 | orchestrator | Saturday 12 July 2025 20:01:14 +0000 (0:00:01.261) 0:03:01.757 ********* 2025-07-12 20:01:18.709780 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:01:18.709791 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:01:18.709802 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:01:18.709813 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:01:18.709823 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:01:18.709834 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:01:18.709845 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:01:18.709856 | orchestrator | 2025-07-12 20:01:18.709866 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-07-12 20:01:18.709878 | orchestrator | Saturday 12 July 
2025 20:01:15 +0000 (0:00:00.508) 0:03:02.266 ********* 2025-07-12 20:01:18.709888 | orchestrator | ok: [testbed-manager] 2025-07-12 20:01:18.709899 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:01:18.709910 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:01:18.709921 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:01:18.709932 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:01:18.709968 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:01:18.709987 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:01:18.710005 | orchestrator | 2025-07-12 20:01:18.710090 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-07-12 20:01:18.710105 | orchestrator | Saturday 12 July 2025 20:01:17 +0000 (0:00:02.836) 0:03:05.102 ********* 2025-07-12 20:01:18.710116 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:01:18.710127 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:01:18.710137 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:01:18.710148 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:01:18.710159 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:01:18.710169 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:01:18.710180 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:01:18.710190 | orchestrator | 2025-07-12 20:01:18.710202 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-07-12 20:01:18.710213 | orchestrator | Saturday 12 July 2025 20:01:18 +0000 (0:00:00.672) 0:03:05.774 ********* 2025-07-12 20:01:18.710223 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-07-12 20:01:18.710234 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-07-12 20:01:18.710245 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:01:18.710274 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  
2025-07-12 20:01:37.914522 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-07-12 20:01:37.914608 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:01:37.914614 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-07-12 20:01:37.914618 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-07-12 20:01:37.914623 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:01:37.914627 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-07-12 20:01:37.914631 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-07-12 20:01:37.914635 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:01:37.914650 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-07-12 20:01:37.914654 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-07-12 20:01:37.914658 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:01:37.914661 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-07-12 20:01:37.914665 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-07-12 20:01:37.914669 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:01:37.914673 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-07-12 20:01:37.914677 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-07-12 20:01:37.914680 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:01:37.914684 | orchestrator | 2025-07-12 20:01:37.914689 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-07-12 20:01:37.914695 | orchestrator | Saturday 12 July 2025 20:01:19 +0000 (0:00:00.554) 0:03:06.328 ********* 2025-07-12 20:01:37.914698 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:01:37.914702 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:01:37.914706 | orchestrator | skipping: 
[testbed-node-1] 2025-07-12 20:01:37.914710 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:01:37.914713 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:01:37.914717 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:01:37.914721 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:01:37.914724 | orchestrator | 2025-07-12 20:01:37.914728 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-07-12 20:01:37.914733 | orchestrator | Saturday 12 July 2025 20:01:19 +0000 (0:00:00.473) 0:03:06.802 ********* 2025-07-12 20:01:37.914736 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:01:37.914740 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:01:37.914744 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:01:37.914747 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:01:37.914751 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:01:37.914755 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:01:37.914759 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:01:37.914763 | orchestrator | 2025-07-12 20:01:37.914767 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-07-12 20:01:37.914770 | orchestrator | Saturday 12 July 2025 20:01:20 +0000 (0:00:00.568) 0:03:07.370 ********* 2025-07-12 20:01:37.914774 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:01:37.914778 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:01:37.914782 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:01:37.914785 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:01:37.914789 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:01:37.914793 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:01:37.914796 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:01:37.914800 | orchestrator | 2025-07-12 20:01:37.914804 | orchestrator | TASK [osism.services.docker : 
Ensure that some packages are not installed] ***** 2025-07-12 20:01:37.914808 | orchestrator | Saturday 12 July 2025 20:01:20 +0000 (0:00:00.718) 0:03:08.088 ********* 2025-07-12 20:01:37.914811 | orchestrator | ok: [testbed-manager] 2025-07-12 20:01:37.914815 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:01:37.914819 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:01:37.914836 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:01:37.914840 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:01:37.914844 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:01:37.914847 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:01:37.914851 | orchestrator | 2025-07-12 20:01:37.914855 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-07-12 20:01:37.914858 | orchestrator | Saturday 12 July 2025 20:01:22 +0000 (0:00:01.574) 0:03:09.662 ********* 2025-07-12 20:01:37.914863 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:01:37.914869 | orchestrator | 2025-07-12 20:01:37.914872 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-07-12 20:01:37.914876 | orchestrator | Saturday 12 July 2025 20:01:23 +0000 (0:00:00.855) 0:03:10.518 ********* 2025-07-12 20:01:37.914880 | orchestrator | ok: [testbed-manager] 2025-07-12 20:01:37.914884 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:01:37.914887 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:01:37.914891 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:01:37.914895 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:01:37.914899 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:01:37.914902 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:01:37.914906 | orchestrator | 2025-07-12 20:01:37.914910 | 
orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-07-12 20:01:37.914914 | orchestrator | Saturday 12 July 2025 20:01:24 +0000 (0:00:00.823) 0:03:11.342 ********* 2025-07-12 20:01:37.914917 | orchestrator | ok: [testbed-manager] 2025-07-12 20:01:37.914921 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:01:37.914925 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:01:37.914928 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:01:37.914932 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:01:37.914967 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:01:37.914971 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:01:37.914975 | orchestrator | 2025-07-12 20:01:37.914979 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-07-12 20:01:37.914983 | orchestrator | Saturday 12 July 2025 20:01:25 +0000 (0:00:01.271) 0:03:12.613 ********* 2025-07-12 20:01:37.914987 | orchestrator | ok: [testbed-manager] 2025-07-12 20:01:37.914990 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:01:37.914994 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:01:37.915008 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:01:37.915012 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:01:37.915015 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:01:37.915019 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:01:37.915023 | orchestrator | 2025-07-12 20:01:37.915026 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-07-12 20:01:37.915030 | orchestrator | Saturday 12 July 2025 20:01:26 +0000 (0:00:01.517) 0:03:14.131 ********* 2025-07-12 20:01:37.915034 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:01:37.915038 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:01:37.915041 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:01:37.915045 | orchestrator | skipping: [testbed-node-2] 
2025-07-12 20:01:37.915049 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:01:37.915055 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:01:37.915059 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:01:37.915063 | orchestrator |
2025-07-12 20:01:37.915066 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-07-12 20:01:37.915071 | orchestrator | Saturday 12 July 2025 20:01:27 +0000 (0:00:00.558) 0:03:14.689 *********
2025-07-12 20:01:37.915075 | orchestrator | ok: [testbed-manager]
2025-07-12 20:01:37.915080 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:37.915084 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:37.915088 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:37.915092 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:01:37.915100 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:01:37.915105 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:01:37.915109 | orchestrator |
2025-07-12 20:01:37.915113 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-07-12 20:01:37.915118 | orchestrator | Saturday 12 July 2025 20:01:28 +0000 (0:00:01.381) 0:03:16.071 *********
2025-07-12 20:01:37.915122 | orchestrator | changed: [testbed-manager]
2025-07-12 20:01:37.915126 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:01:37.915130 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:01:37.915135 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:01:37.915139 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:01:37.915143 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:01:37.915147 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:01:37.915152 | orchestrator |
2025-07-12 20:01:37.915156 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-07-12 20:01:37.915160 | orchestrator | Saturday 12 July 2025 20:01:30 +0000 (0:00:01.621) 0:03:17.693 *********
2025-07-12 20:01:37.915164 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:01:37.915169 | orchestrator |
2025-07-12 20:01:37.915173 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-07-12 20:01:37.915177 | orchestrator | Saturday 12 July 2025 20:01:31 +0000 (0:00:01.039) 0:03:18.733 *********
2025-07-12 20:01:37.915181 | orchestrator | ok: [testbed-manager]
2025-07-12 20:01:37.915185 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:37.915190 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:37.915194 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:01:37.915198 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:37.915202 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:01:37.915206 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:01:37.915210 | orchestrator |
2025-07-12 20:01:37.915215 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-07-12 20:01:37.915219 | orchestrator | Saturday 12 July 2025 20:01:32 +0000 (0:00:01.402) 0:03:20.136 *********
2025-07-12 20:01:37.915223 | orchestrator | ok: [testbed-manager]
2025-07-12 20:01:37.915228 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:37.915232 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:37.915236 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:37.915240 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:01:37.915244 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:01:37.915248 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:01:37.915253 | orchestrator |
2025-07-12 20:01:37.915257 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-07-12 20:01:37.915261 | orchestrator | Saturday 12 July 2025 20:01:34 +0000 (0:00:01.230) 0:03:21.366 *********
2025-07-12 20:01:37.915265 | orchestrator | ok: [testbed-manager]
2025-07-12 20:01:37.915270 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:37.915274 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:37.915278 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:37.915282 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:01:37.915286 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:01:37.915291 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:01:37.915295 | orchestrator |
2025-07-12 20:01:37.915299 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-07-12 20:01:37.915303 | orchestrator | Saturday 12 July 2025 20:01:35 +0000 (0:00:01.408) 0:03:22.774 *********
2025-07-12 20:01:37.915308 | orchestrator | ok: [testbed-manager]
2025-07-12 20:01:37.915312 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:37.915317 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:37.915321 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:37.915325 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:01:37.915329 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:01:37.915333 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:01:37.915342 | orchestrator |
2025-07-12 20:01:37.915346 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-07-12 20:01:37.915350 | orchestrator | Saturday 12 July 2025 20:01:36 +0000 (0:00:00.837) 0:03:23.980 *********
2025-07-12 20:01:37.915355 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:01:37.915359 | orchestrator |
2025-07-12 20:01:37.915363 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-12 20:01:37.915367 | orchestrator | Saturday 12 July 2025 20:01:37 +0000 (0:00:00.837) 0:03:24.818 *********
2025-07-12 20:01:37.915372 | orchestrator |
2025-07-12 20:01:37.915376 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-12 20:01:37.915380 | orchestrator | Saturday 12 July 2025 20:01:37 +0000 (0:00:00.043) 0:03:24.861 *********
2025-07-12 20:01:37.915384 | orchestrator |
2025-07-12 20:01:37.915391 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-12 20:01:58.970203 | orchestrator | Saturday 12 July 2025 20:01:37 +0000 (0:00:00.038) 0:03:24.900 *********
2025-07-12 20:01:58.970308 | orchestrator |
2025-07-12 20:01:58.970325 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-12 20:01:58.970337 | orchestrator | Saturday 12 July 2025 20:01:37 +0000 (0:00:00.037) 0:03:24.938 *********
2025-07-12 20:01:58.970349 | orchestrator |
2025-07-12 20:01:58.970360 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-12 20:01:58.970371 | orchestrator | Saturday 12 July 2025 20:01:37 +0000 (0:00:00.042) 0:03:24.981 *********
2025-07-12 20:01:58.970382 | orchestrator |
2025-07-12 20:01:58.970393 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-12 20:01:58.970404 | orchestrator | Saturday 12 July 2025 20:01:37 +0000 (0:00:00.038) 0:03:25.019 *********
2025-07-12 20:01:58.970414 | orchestrator |
2025-07-12 20:01:58.970425 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-07-12 20:01:58.970436 | orchestrator | Saturday 12 July 2025 20:01:37 +0000 (0:00:00.040) 0:03:25.060 *********
2025-07-12 20:01:58.970447 | orchestrator |
2025-07-12 20:01:58.970458 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-07-12 20:01:58.970469 | orchestrator | Saturday 12 July 2025 20:01:37 +0000 (0:00:00.044) 0:03:25.105 *********
2025-07-12 20:01:58.970480 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:58.970492 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:58.970503 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:58.970514 | orchestrator |
2025-07-12 20:01:58.970525 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-07-12 20:01:58.970536 | orchestrator | Saturday 12 July 2025 20:01:39 +0000 (0:00:01.296) 0:03:26.401 *********
2025-07-12 20:01:58.970547 | orchestrator | changed: [testbed-manager]
2025-07-12 20:01:58.970559 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:01:58.970569 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:01:58.970580 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:01:58.970626 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:01:58.970638 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:01:58.970650 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:01:58.970661 | orchestrator |
2025-07-12 20:01:58.970672 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-07-12 20:01:58.970683 | orchestrator | Saturday 12 July 2025 20:01:40 +0000 (0:00:01.391) 0:03:27.793 *********
2025-07-12 20:01:58.970694 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:01:58.970705 | orchestrator | changed: [testbed-manager]
2025-07-12 20:01:58.970716 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:01:58.970726 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:01:58.970737 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:01:58.970748 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:01:58.970762 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:01:58.970798 | orchestrator |
2025-07-12 20:01:58.970811 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-07-12 20:01:58.970823 | orchestrator | Saturday 12 July 2025 20:01:42 +0000 (0:00:01.812) 0:03:29.605 *********
2025-07-12 20:01:58.970836 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:01:58.970848 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:01:58.970861 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:01:58.970873 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:01:58.970885 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:01:58.970898 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:01:58.970910 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:01:58.970922 | orchestrator |
2025-07-12 20:01:58.970935 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-07-12 20:01:58.970966 | orchestrator | Saturday 12 July 2025 20:01:44 +0000 (0:00:02.544) 0:03:32.150 *********
2025-07-12 20:01:58.970978 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:01:58.970991 | orchestrator |
2025-07-12 20:01:58.971003 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-07-12 20:01:58.971015 | orchestrator | Saturday 12 July 2025 20:01:45 +0000 (0:00:00.102) 0:03:32.253 *********
2025-07-12 20:01:58.971028 | orchestrator | ok: [testbed-manager]
2025-07-12 20:01:58.971041 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:58.971053 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:58.971065 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:01:58.971079 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:01:58.971091 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:58.971103 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:01:58.971115 | orchestrator |
2025-07-12 20:01:58.971128 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-07-12 20:01:58.971141 | orchestrator | Saturday 12 July 2025 20:01:46 +0000 (0:00:01.029) 0:03:33.282 *********
2025-07-12 20:01:58.971152 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:01:58.971163 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:01:58.971174 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:58.971184 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:58.971195 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:01:58.971206 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:01:58.971216 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:01:58.971227 | orchestrator |
2025-07-12 20:01:58.971238 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-07-12 20:01:58.971249 | orchestrator | Saturday 12 July 2025 20:01:46 +0000 (0:00:00.713) 0:03:33.996 *********
2025-07-12 20:01:58.971261 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:01:58.971274 | orchestrator |
2025-07-12 20:01:58.971285 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-07-12 20:01:58.971296 | orchestrator | Saturday 12 July 2025 20:01:47 +0000 (0:00:00.868) 0:03:34.864 *********
2025-07-12 20:01:58.971307 | orchestrator | ok: [testbed-manager]
2025-07-12 20:01:58.971318 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:58.971328 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:58.971339 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:58.971350 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:01:58.971375 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:01:58.971386 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:01:58.971397 | orchestrator |
2025-07-12 20:01:58.971424 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-07-12 20:01:58.971436 | orchestrator | Saturday 12 July 2025 20:01:48 +0000 (0:00:00.837) 0:03:35.701 *********
2025-07-12 20:01:58.971447 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-07-12 20:01:58.971459 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-07-12 20:01:58.971470 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-07-12 20:01:58.971489 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-07-12 20:01:58.971505 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-07-12 20:01:58.971516 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-07-12 20:01:58.971527 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-07-12 20:01:58.971538 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-07-12 20:01:58.971549 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-07-12 20:01:58.971560 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-07-12 20:01:58.971571 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-07-12 20:01:58.971582 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-07-12 20:01:58.971593 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-07-12 20:01:58.971603 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-07-12 20:01:58.971614 | orchestrator |
2025-07-12 20:01:58.971625 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-07-12 20:01:58.971636 | orchestrator | Saturday 12 July 2025 20:01:51 +0000 (0:00:02.740) 0:03:38.442 *********
2025-07-12 20:01:58.971647 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:01:58.971658 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:01:58.971668 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:58.971679 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:58.971690 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:01:58.971701 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:01:58.971711 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:01:58.971722 | orchestrator |
2025-07-12 20:01:58.971733 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-07-12 20:01:58.971744 | orchestrator | Saturday 12 July 2025 20:01:51 +0000 (0:00:00.512) 0:03:38.954 *********
2025-07-12 20:01:58.971756 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:01:58.971769 | orchestrator |
2025-07-12 20:01:58.971780 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-07-12 20:01:58.971791 | orchestrator | Saturday 12 July 2025 20:01:52 +0000 (0:00:00.847) 0:03:39.802 *********
2025-07-12 20:01:58.971802 | orchestrator | ok: [testbed-manager]
2025-07-12 20:01:58.971812 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:58.971823 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:58.971834 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:58.971844 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:01:58.971855 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:01:58.971865 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:01:58.971876 | orchestrator |
2025-07-12 20:01:58.971887 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-07-12 20:01:58.971898 | orchestrator | Saturday 12 July 2025 20:01:53 +0000 (0:00:00.979) 0:03:40.781 *********
2025-07-12 20:01:58.971909 | orchestrator | ok: [testbed-manager]
2025-07-12 20:01:58.971919 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:58.971930 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:58.971955 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:58.971967 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:01:58.971977 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:01:58.971988 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:01:58.971999 | orchestrator |
2025-07-12 20:01:58.972010 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-07-12 20:01:58.972020 | orchestrator | Saturday 12 July 2025 20:01:54 +0000 (0:00:00.870) 0:03:41.652 *********
2025-07-12 20:01:58.972031 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:01:58.972042 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:01:58.972065 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:58.972083 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:58.972102 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:01:58.972121 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:01:58.972140 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:01:58.972158 | orchestrator |
2025-07-12 20:01:58.972176 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-07-12 20:01:58.972188 | orchestrator | Saturday 12 July 2025 20:01:55 +0000 (0:00:00.588) 0:03:42.240 *********
2025-07-12 20:01:58.972198 | orchestrator | ok: [testbed-manager]
2025-07-12 20:01:58.972209 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:01:58.972220 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:01:58.972231 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:01:58.972241 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:01:58.972252 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:01:58.972262 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:01:58.972273 | orchestrator |
2025-07-12 20:01:58.972284 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-07-12 20:01:58.972294 | orchestrator | Saturday 12 July 2025 20:01:56 +0000 (0:00:01.551) 0:03:43.791 *********
2025-07-12 20:01:58.972305 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:01:58.972316 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:01:58.972327 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:01:58.972337 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:01:58.972348 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:01:58.972359 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:01:58.972369 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:01:58.972380 | orchestrator |
2025-07-12 20:01:58.972390 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-07-12 20:01:58.972401 | orchestrator | Saturday 12 July 2025 20:01:57 +0000 (0:00:00.435) 0:03:44.227 *********
2025-07-12 20:01:58.972412 | orchestrator | ok: [testbed-manager]
2025-07-12 20:01:58.972432 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:02:23.664109 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:02:23.664226 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:02:23.664243 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:02:23.664255 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:02:23.664266 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:02:23.664277 | orchestrator |
2025-07-12 20:02:23.664290 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-07-12 20:02:23.664303 | orchestrator | Saturday 12 July 2025 20:01:58 +0000 (0:00:01.941) 0:03:46.169 *********
2025-07-12 20:02:23.664314 | orchestrator | ok: [testbed-manager]
2025-07-12 20:02:23.664325 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:02:23.664336 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:02:23.664363 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:02:23.664375 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:02:23.664386 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:02:23.664396 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:02:23.664407 | orchestrator |
2025-07-12 20:02:23.664418 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-07-12 20:02:23.664429 | orchestrator | Saturday 12 July 2025 20:02:00 +0000 (0:00:01.268) 0:03:47.437 *********
2025-07-12 20:02:23.664440 | orchestrator | ok: [testbed-manager]
2025-07-12 20:02:23.664451 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:02:23.664461 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:02:23.664472 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:02:23.664483 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:02:23.664494 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:02:23.664505 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:02:23.664516 | orchestrator |
2025-07-12 20:02:23.664527 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-07-12 20:02:23.664538 | orchestrator | Saturday 12 July 2025 20:02:01 +0000 (0:00:01.346) 0:03:48.784 *********
2025-07-12 20:02:23.664549 | orchestrator | ok: [testbed-manager]
2025-07-12 20:02:23.664560 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:02:23.664595 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:02:23.664607 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:02:23.664618 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:02:23.664629 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:02:23.664641 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:02:23.664653 | orchestrator |
2025-07-12 20:02:23.664665 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-07-12 20:02:23.664678 | orchestrator | Saturday 12 July 2025 20:02:03 +0000 (0:00:01.478) 0:03:50.262 *********
2025-07-12 20:02:23.664691 | orchestrator | ok: [testbed-manager]
2025-07-12 20:02:23.664703 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:02:23.664714 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:02:23.664724 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:02:23.664735 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:02:23.664745 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:02:23.664756 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:02:23.664805 | orchestrator |
2025-07-12 20:02:23.664829 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-07-12 20:02:23.664840 | orchestrator | Saturday 12 July 2025 20:02:03 +0000 (0:00:00.923) 0:03:51.185 *********
2025-07-12 20:02:23.664851 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:02:23.664864 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:02:23.664875 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:02:23.664886 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:02:23.664896 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:02:23.664907 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:02:23.664917 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:02:23.664929 | orchestrator |
2025-07-12 20:02:23.664967 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-07-12 20:02:23.664980 | orchestrator | Saturday 12 July 2025 20:02:04 +0000 (0:00:00.708) 0:03:51.894 *********
2025-07-12 20:02:23.664991 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:02:23.665002 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:02:23.665012 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:02:23.665023 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:02:23.665034 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:02:23.665044 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:02:23.665055 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:02:23.665066 | orchestrator |
2025-07-12 20:02:23.665076 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-07-12 20:02:23.665087 | orchestrator | Saturday 12 July 2025 20:02:05 +0000 (0:00:00.428) 0:03:52.322 *********
2025-07-12 20:02:23.665098 | orchestrator | ok: [testbed-manager]
2025-07-12 20:02:23.665109 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:02:23.665120 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:02:23.665130 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:02:23.665141 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:02:23.665151 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:02:23.665162 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:02:23.665173 | orchestrator |
2025-07-12 20:02:23.665183 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-07-12 20:02:23.665194 | orchestrator | Saturday 12 July 2025 20:02:05 +0000 (0:00:00.440) 0:03:52.763 *********
2025-07-12 20:02:23.665205 | orchestrator | ok: [testbed-manager]
2025-07-12 20:02:23.665216 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:02:23.665227 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:02:23.665237 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:02:23.665248 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:02:23.665258 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:02:23.665269 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:02:23.665279 | orchestrator |
2025-07-12 20:02:23.665290 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-07-12 20:02:23.665301 | orchestrator | Saturday 12 July 2025 20:02:06 +0000 (0:00:00.562) 0:03:53.326 *********
2025-07-12 20:02:23.665312 | orchestrator | ok: [testbed-manager]
2025-07-12 20:02:23.665331 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:02:23.665342 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:02:23.665353 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:02:23.665363 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:02:23.665374 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:02:23.665384 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:02:23.665395 | orchestrator |
2025-07-12 20:02:23.665406 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-07-12 20:02:23.665416 | orchestrator | Saturday 12 July 2025 20:02:06 +0000 (0:00:00.468) 0:03:53.794 *********
2025-07-12 20:02:23.665427 | orchestrator | ok: [testbed-manager]
2025-07-12 20:02:23.665438 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:02:23.665448 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:02:23.665459 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:02:23.665470 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:02:23.665480 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:02:23.665509 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:02:23.665521 | orchestrator |
2025-07-12 20:02:23.665533 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-07-12 20:02:23.665544 | orchestrator | Saturday 12 July 2025 20:02:12 +0000 (0:00:05.914) 0:03:59.709 *********
2025-07-12 20:02:23.665555 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:02:23.665566 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:02:23.665577 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:02:23.665587 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:02:23.665598 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:02:23.665609 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:02:23.665626 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:02:23.665637 | orchestrator |
2025-07-12 20:02:23.665648 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-07-12 20:02:23.665659 | orchestrator | Saturday 12 July 2025 20:02:13 +0000 (0:00:00.541) 0:04:00.251 *********
2025-07-12 20:02:23.665672 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:02:23.665685 | orchestrator |
2025-07-12 20:02:23.665697 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-07-12 20:02:23.665708 | orchestrator | Saturday 12 July 2025 20:02:14 +0000 (0:00:00.964) 0:04:01.216 *********
2025-07-12 20:02:23.665719 | orchestrator | ok: [testbed-manager]
2025-07-12 20:02:23.665729 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:02:23.665740 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:02:23.665751 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:02:23.665762 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:02:23.665772 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:02:23.665783 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:02:23.665794 | orchestrator |
2025-07-12 20:02:23.665805 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-07-12 20:02:23.665816 | orchestrator | Saturday 12 July 2025 20:02:15 +0000 (0:00:01.925) 0:04:03.141 *********
2025-07-12 20:02:23.665827 | orchestrator | ok: [testbed-manager]
2025-07-12 20:02:23.665837 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:02:23.665848 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:02:23.665859 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:02:23.665869 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:02:23.665880 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:02:23.665891 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:02:23.665901 | orchestrator |
2025-07-12 20:02:23.665912 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-07-12 20:02:23.665924 | orchestrator | Saturday 12 July 2025 20:02:17 +0000 (0:00:01.884) 0:04:05.025 *********
2025-07-12 20:02:23.665934 | orchestrator | ok: [testbed-manager]
2025-07-12 20:02:23.665990 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:02:23.666010 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:02:23.666102 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:02:23.666115 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:02:23.666125 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:02:23.666136 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:02:23.666147 | orchestrator |
2025-07-12 20:02:23.666158 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-07-12 20:02:23.666169 | orchestrator | Saturday 12 July 2025 20:02:18 +0000 (0:00:00.828) 0:04:05.853 *********
2025-07-12 20:02:23.666180 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 20:02:23.666193 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 20:02:23.666204 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 20:02:23.666215 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 20:02:23.666225 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 20:02:23.666236 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 20:02:23.666247 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-07-12 20:02:23.666258 | orchestrator |
2025-07-12 20:02:23.666269 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-07-12 20:02:23.666279 | orchestrator | Saturday 12 July 2025 20:02:20 +0000 (0:00:01.991) 0:04:07.845 *********
2025-07-12 20:02:23.666291 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:02:23.666302 | orchestrator |
2025-07-12 20:02:23.666313 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-07-12 20:02:23.666323 | orchestrator | Saturday 12 July 2025 20:02:21 +0000 (0:00:00.807) 0:04:08.652 *********
2025-07-12 20:02:23.666334 | orchestrator | ok: [testbed-manager]
2025-07-12 20:02:23.666345 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:02:23.666356 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:02:23.666366 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:02:23.666377 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:02:23.666388 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:02:23.666398 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:02:23.666409 | orchestrator |
2025-07-12 20:02:23.666429 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-07-12 20:02:37.959605 | orchestrator | Saturday 12 July 2025 20:02:23 +0000 (0:00:02.199) 0:04:10.851 *********
2025-07-12 20:02:37.959725 | orchestrator | ok: [testbed-manager]
2025-07-12 20:02:37.959741 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:02:37.959753 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:02:37.959765 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:02:37.959776 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:02:37.959787 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:02:37.959797 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:02:37.959809 | orchestrator |
2025-07-12 20:02:37.959821 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-07-12 20:02:37.959849 | orchestrator | Saturday 12 July 2025 20:02:25 +0000 (0:00:01.782) 0:04:12.634 *********
2025-07-12 20:02:37.959861 | orchestrator | changed: [testbed-manager]
2025-07-12 20:02:37.959873 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:02:37.959884 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:02:37.959919 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:02:37.959931 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:02:37.959999 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:02:37.960011 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:02:37.960023 | orchestrator |
2025-07-12 20:02:37.960034 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-07-12 20:02:37.960045 | orchestrator |
2025-07-12 20:02:37.960056 | orchestrator | TASK [Include hardening role] **************************************************
2025-07-12 20:02:37.960067 | orchestrator | Saturday 12 July 2025 20:02:27 +0000 (0:00:02.158) 0:04:14.792 *********
2025-07-12 20:02:37.960078 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:02:37.960089 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:02:37.960100 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:02:37.960111 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:02:37.960122 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:02:37.960132 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:02:37.960144 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:02:37.960157 | orchestrator |
2025-07-12 20:02:37.960169 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-07-12 20:02:37.960182 | orchestrator |
2025-07-12 20:02:37.960194 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-07-12 20:02:37.960207 | orchestrator | Saturday 12 July 2025 20:02:28 +0000 (0:00:00.682) 0:04:15.475 *********
2025-07-12 20:02:37.960219 | orchestrator | ok: [testbed-manager]
2025-07-12 20:02:37.960231 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:02:37.960244 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:02:37.960256 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:02:37.960268 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:02:37.960281 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:02:37.960293 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:02:37.960305 | orchestrator |
2025-07-12 20:02:37.960317 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-07-12 20:02:37.960329 | orchestrator | Saturday 12 July 2025 20:02:29 +0000 (0:00:01.258) 0:04:16.734 *********
2025-07-12 20:02:37.960340 | orchestrator | ok: [testbed-manager]
2025-07-12 20:02:37.960350 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:02:37.960361 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:02:37.960373 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:02:37.960384 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:02:37.960395 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:02:37.960406 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:02:37.960416 | orchestrator |
2025-07-12 20:02:37.960427 | orchestrator | TASK [Include auditd role] *****************************************************
2025-07-12 20:02:37.960438 | orchestrator | Saturday 12 July 2025 20:02:30 +0000 (0:00:01.444) 0:04:18.178 *********
2025-07-12 20:02:37.960449 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:02:37.960460 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:02:37.960471 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:02:37.960481 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:02:37.960492 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:02:37.960503 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:02:37.960514 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:02:37.960524 | orchestrator |
2025-07-12 20:02:37.960535 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-07-12 20:02:37.960546 | orchestrator |
2025-07-12 20:02:37.960557 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-07-12 20:02:37.960568 | orchestrator | Saturday 12 July 2025 20:02:31 +0000 (0:00:00.528) 0:04:18.706 *********
2025-07-12 20:02:37.960580 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:02:37.960592 | orchestrator |
2025-07-12 20:02:37.960603 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-07-12 20:02:37.960614 | orchestrator | Saturday 12 July 2025 20:02:32 +0000 (0:00:00.945) 0:04:19.652 *********
2025-07-12 20:02:37.960634 | orchestrator | ok: [testbed-manager]
2025-07-12 20:02:37.960644 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:02:37.960655 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:02:37.960666 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:02:37.960677 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:02:37.960688 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:02:37.960698 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:02:37.960709 | orchestrator |
2025-07-12 20:02:37.960720 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-07-12 20:02:37.960731 | orchestrator | Saturday 12 July 2025 20:02:33 +0000 (0:00:00.819)
0:04:20.471 ********* 2025-07-12 20:02:37.960742 | orchestrator | changed: [testbed-manager] 2025-07-12 20:02:37.960753 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:02:37.960764 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:02:37.960775 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:02:37.960785 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:02:37.960796 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:02:37.960807 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:02:37.960817 | orchestrator | 2025-07-12 20:02:37.960828 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-07-12 20:02:37.960839 | orchestrator | Saturday 12 July 2025 20:02:34 +0000 (0:00:01.159) 0:04:21.631 ********* 2025-07-12 20:02:37.960850 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:02:37.960861 | orchestrator | 2025-07-12 20:02:37.960892 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-07-12 20:02:37.960904 | orchestrator | Saturday 12 July 2025 20:02:35 +0000 (0:00:01.023) 0:04:22.654 ********* 2025-07-12 20:02:37.960915 | orchestrator | ok: [testbed-manager] 2025-07-12 20:02:37.960926 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:02:37.960936 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:02:37.960971 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:02:37.960983 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:02:37.960993 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:02:37.961004 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:02:37.961015 | orchestrator | 2025-07-12 20:02:37.961032 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-07-12 20:02:37.961043 | orchestrator | Saturday 12 July 2025 20:02:36 +0000 (0:00:00.853) 
0:04:23.508 *********
2025-07-12 20:02:37.961054 | orchestrator | changed: [testbed-manager]
2025-07-12 20:02:37.961065 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:02:37.961076 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:02:37.961087 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:02:37.961098 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:02:37.961109 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:02:37.961120 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:02:37.961131 | orchestrator |
2025-07-12 20:02:37.961142 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:02:37.961154 | orchestrator | testbed-manager : ok=160  changed=25  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2025-07-12 20:02:37.961165 | orchestrator | testbed-node-0 : ok=167  changed=35  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2025-07-12 20:02:37.961176 | orchestrator | testbed-node-1 : ok=167  changed=35  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-07-12 20:02:37.961187 | orchestrator | testbed-node-2 : ok=167  changed=35  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-07-12 20:02:37.961198 | orchestrator | testbed-node-3 : ok=166  changed=32  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-07-12 20:02:37.961216 | orchestrator | testbed-node-4 : ok=166  changed=32  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-07-12 20:02:37.961227 | orchestrator | testbed-node-5 : ok=166  changed=32  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-07-12 20:02:37.961238 | orchestrator |
2025-07-12 20:02:37.961249 | orchestrator |
2025-07-12 20:02:37.961260 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:02:37.961271 | orchestrator | Saturday 12 July 2025 20:02:37 +0000 (0:00:01.255) 0:04:24.763 *********
2025-07-12 20:02:37.961282 | orchestrator | ===============================================================================
2025-07-12 20:02:37.961293 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 21.81s
2025-07-12 20:02:37.961304 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.00s
2025-07-12 20:02:37.961315 | orchestrator | osism.services.smartd : Install smartmontools package ------------------ 10.04s
2025-07-12 20:02:37.961326 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.04s
2025-07-12 20:02:37.961337 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.91s
2025-07-12 20:02:37.961347 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.70s
2025-07-12 20:02:37.961358 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.69s
2025-07-12 20:02:37.961369 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.94s
2025-07-12 20:02:37.961380 | orchestrator | osism.commons.systohc : Install util-linux-extra package ---------------- 3.62s
2025-07-12 20:02:37.961391 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 3.55s
2025-07-12 20:02:37.961402 | orchestrator | osism.services.docker : Update package cache ---------------------------- 3.31s
2025-07-12 20:02:37.961413 | orchestrator | osism.services.docker : Install python3 docker package from Debian Sid --- 2.84s
2025-07-12 20:02:37.961423 | orchestrator | osism.services.docker : Copy docker fact files -------------------------- 2.74s
2025-07-12 20:02:37.961434 | orchestrator | osism.services.docker : Gather package facts ---------------------------- 2.55s
2025-07-12 20:02:37.961445 | orchestrator | osism.services.docker : Restart docker service -------------------------- 2.54s
2025-07-12 20:02:37.961456 | orchestrator | osism.commons.packages : Upgrade packages ------------------------------- 2.45s
2025-07-12 20:02:37.961467 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 2.33s
2025-07-12 20:02:37.961477 | orchestrator | osism.commons.motd : Remove update-motd package ------------------------- 2.23s
2025-07-12 20:02:37.961488 | orchestrator | osism.services.docker : Install containerd package ---------------------- 2.23s
2025-07-12 20:02:37.961499 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 2.20s
2025-07-12 20:02:38.234750 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-07-12 20:02:38.234835 | orchestrator | + osism apply network
2025-07-12 20:02:50.222112 | orchestrator | 2025-07-12 20:02:50 | INFO  | Task 0b1ff60a-b30e-40d7-90f3-22a8b0fd7a5b (network) was prepared for execution.
2025-07-12 20:02:50.222227 | orchestrator | 2025-07-12 20:02:50 | INFO  | It takes a moment until task 0b1ff60a-b30e-40d7-90f3-22a8b0fd7a5b (network) has been started and output is visible here.
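[editor's aside] The PLAY RECAP block above uses Ansible's fixed `host : ok=N changed=N ...` counter format, so a post-processing script can turn each host line into structured data (for example, to fail a wrapper job when any `failed` or `unreachable` counter is non-zero). A minimal sketch, assuming only the recap line format shown in this log; the function name and regex are illustrative, not part of the job:

```python
import re

def parse_recap(line):
    """Parse one PLAY RECAP host line into (hostname, counter dict)."""
    host, _, counters = line.partition(":")
    # Counters are space-separated key=value pairs of integers.
    stats = {k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", counters)}
    return host.strip(), stats

# Example using a recap line from the log above:
host, stats = parse_recap(
    "testbed-manager : ok=160  changed=25  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0"
)
assert host == "testbed-manager"
assert stats["ok"] == 160 and stats["failed"] == 0
```

This would report success for the run above, since every host finished with `failed=0` and `unreachable=0`.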
2025-07-12 20:03:18.703379 | orchestrator | 2025-07-12 20:03:18.703526 | orchestrator | PLAY [Apply role network] ****************************************************** 2025-07-12 20:03:18.703556 | orchestrator | 2025-07-12 20:03:18.703578 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ****** 2025-07-12 20:03:18.703599 | orchestrator | Saturday 12 July 2025 20:02:54 +0000 (0:00:00.281) 0:00:00.281 ********* 2025-07-12 20:03:18.703620 | orchestrator | ok: [testbed-manager] 2025-07-12 20:03:18.703639 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:03:18.703659 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:03:18.703679 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:03:18.703797 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:03:18.703821 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:03:18.703840 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:03:18.703859 | orchestrator | 2025-07-12 20:03:18.703884 | orchestrator | TASK [osism.commons.network : Include type specific tasks] ********************* 2025-07-12 20:03:18.703906 | orchestrator | Saturday 12 July 2025 20:02:55 +0000 (0:00:00.751) 0:00:01.032 ********* 2025-07-12 20:03:18.703932 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:03:18.704051 | orchestrator | 2025-07-12 20:03:18.704075 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-07-12 20:03:18.704098 | orchestrator | Saturday 12 July 2025 20:02:56 +0000 (0:00:01.269) 0:00:02.302 ********* 2025-07-12 20:03:18.704122 | orchestrator | ok: [testbed-manager] 2025-07-12 20:03:18.704145 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:03:18.704167 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:03:18.704188 | 
orchestrator | ok: [testbed-node-0] 2025-07-12 20:03:18.704206 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:03:18.704228 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:03:18.704248 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:03:18.704268 | orchestrator | 2025-07-12 20:03:18.704288 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-07-12 20:03:18.704308 | orchestrator | Saturday 12 July 2025 20:02:58 +0000 (0:00:02.243) 0:00:04.545 ********* 2025-07-12 20:03:18.704327 | orchestrator | ok: [testbed-manager] 2025-07-12 20:03:18.704348 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:03:18.704368 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:03:18.704388 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:03:18.704408 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:03:18.704428 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:03:18.704448 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:03:18.704468 | orchestrator | 2025-07-12 20:03:18.704488 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-07-12 20:03:18.704507 | orchestrator | Saturday 12 July 2025 20:03:00 +0000 (0:00:01.896) 0:00:06.442 ********* 2025-07-12 20:03:18.704526 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-07-12 20:03:18.704545 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-07-12 20:03:18.704564 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-07-12 20:03:18.704583 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-07-12 20:03:18.704601 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-07-12 20:03:18.704619 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-07-12 20:03:18.704637 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-07-12 20:03:18.704654 | orchestrator | 2025-07-12 20:03:18.704672 | orchestrator | TASK [osism.commons.network : 
Prepare netplan configuration template] ********** 2025-07-12 20:03:18.704690 | orchestrator | Saturday 12 July 2025 20:03:01 +0000 (0:00:00.987) 0:00:07.430 ********* 2025-07-12 20:03:18.704708 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-12 20:03:18.704727 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-07-12 20:03:18.704746 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 20:03:18.704764 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-12 20:03:18.704782 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-12 20:03:18.704801 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-12 20:03:18.704820 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-07-12 20:03:18.704840 | orchestrator | 2025-07-12 20:03:18.704860 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-07-12 20:03:18.704880 | orchestrator | Saturday 12 July 2025 20:03:04 +0000 (0:00:03.226) 0:00:10.657 ********* 2025-07-12 20:03:18.704900 | orchestrator | changed: [testbed-manager] 2025-07-12 20:03:18.704920 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:03:18.704940 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:03:18.705012 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:03:18.705031 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:03:18.705050 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:03:18.705069 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:03:18.705089 | orchestrator | 2025-07-12 20:03:18.705108 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-07-12 20:03:18.705125 | orchestrator | Saturday 12 July 2025 20:03:06 +0000 (0:00:01.680) 0:00:12.338 ********* 2025-07-12 20:03:18.705143 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 20:03:18.705160 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-12 20:03:18.705178 | orchestrator | ok: [testbed-node-3 
-> localhost] 2025-07-12 20:03:18.705196 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-07-12 20:03:18.705214 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-07-12 20:03:18.705232 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-07-12 20:03:18.705252 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-07-12 20:03:18.705269 | orchestrator | 2025-07-12 20:03:18.705289 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-07-12 20:03:18.705303 | orchestrator | Saturday 12 July 2025 20:03:08 +0000 (0:00:01.589) 0:00:13.928 ********* 2025-07-12 20:03:18.705314 | orchestrator | ok: [testbed-manager] 2025-07-12 20:03:18.705325 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:03:18.705336 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:03:18.705347 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:03:18.705357 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:03:18.705368 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:03:18.705378 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:03:18.705389 | orchestrator | 2025-07-12 20:03:18.705400 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2025-07-12 20:03:18.705435 | orchestrator | Saturday 12 July 2025 20:03:09 +0000 (0:00:01.120) 0:00:15.048 ********* 2025-07-12 20:03:18.705447 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:03:18.705458 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:18.705469 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:18.705480 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:18.705490 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:03:18.705501 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:03:18.705522 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:03:18.705533 | orchestrator | 2025-07-12 20:03:18.705544 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2025-07-12 20:03:18.705555 | orchestrator | Saturday 12 July 2025 20:03:09 +0000 (0:00:00.629) 0:00:15.678 ********* 2025-07-12 20:03:18.705566 | orchestrator | ok: [testbed-manager] 2025-07-12 20:03:18.705577 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:03:18.705587 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:03:18.705598 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:03:18.705609 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:03:18.705619 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:03:18.705630 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:03:18.705640 | orchestrator | 2025-07-12 20:03:18.705651 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-07-12 20:03:18.705662 | orchestrator | Saturday 12 July 2025 20:03:11 +0000 (0:00:02.193) 0:00:17.871 ********* 2025-07-12 20:03:18.705672 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:18.705683 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:18.705694 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:18.705705 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:03:18.705715 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:03:18.705726 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:03:18.705737 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-07-12 20:03:18.705822 | orchestrator | 2025-07-12 20:03:18.705834 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-07-12 20:03:18.705855 | orchestrator | Saturday 12 July 2025 20:03:12 +0000 (0:00:00.845) 0:00:18.717 ********* 2025-07-12 20:03:18.705864 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:03:18.705874 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:03:18.705884 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:03:18.705894 | 
orchestrator | ok: [testbed-manager] 2025-07-12 20:03:18.705903 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:03:18.705913 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:03:18.705922 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:03:18.705932 | orchestrator | 2025-07-12 20:03:18.705942 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-07-12 20:03:18.706007 | orchestrator | Saturday 12 July 2025 20:03:14 +0000 (0:00:01.440) 0:00:20.157 ********* 2025-07-12 20:03:18.706077 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:03:18.706093 | orchestrator | 2025-07-12 20:03:18.706103 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-07-12 20:03:18.706113 | orchestrator | Saturday 12 July 2025 20:03:15 +0000 (0:00:01.260) 0:00:21.418 ********* 2025-07-12 20:03:18.706122 | orchestrator | ok: [testbed-manager] 2025-07-12 20:03:18.706132 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:03:18.706177 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:03:18.706187 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:03:18.706196 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:03:18.706206 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:03:18.706215 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:03:18.706224 | orchestrator | 2025-07-12 20:03:18.706234 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-07-12 20:03:18.706244 | orchestrator | Saturday 12 July 2025 20:03:16 +0000 (0:00:01.229) 0:00:22.647 ********* 2025-07-12 20:03:18.706254 | orchestrator | ok: [testbed-manager] 2025-07-12 20:03:18.706263 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:03:18.706273 | orchestrator | ok: 
[testbed-node-1] 2025-07-12 20:03:18.706282 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:03:18.706291 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:03:18.706301 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:03:18.706310 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:03:18.706320 | orchestrator | 2025-07-12 20:03:18.706330 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-07-12 20:03:18.706339 | orchestrator | Saturday 12 July 2025 20:03:17 +0000 (0:00:00.669) 0:00:23.317 ********* 2025-07-12 20:03:18.706349 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 20:03:18.706359 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 20:03:18.706368 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 20:03:18.706378 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 20:03:18.706387 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 20:03:18.706397 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 20:03:18.706406 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 20:03:18.706416 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 20:03:18.706426 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 20:03:18.706435 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 20:03:18.706444 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 20:03:18.706454 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 20:03:18.706463 | orchestrator | changed: [testbed-node-5] => 
(item=/etc/netplan/50-cloud-init.yaml) 2025-07-12 20:03:18.706473 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-07-12 20:03:18.706491 | orchestrator | 2025-07-12 20:03:18.706511 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-07-12 20:03:36.735308 | orchestrator | Saturday 12 July 2025 20:03:18 +0000 (0:00:01.264) 0:00:24.581 ********* 2025-07-12 20:03:36.735403 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:03:36.735414 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:36.735422 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:36.735429 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:36.735436 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:03:36.735443 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:03:36.735450 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:03:36.735457 | orchestrator | 2025-07-12 20:03:36.735464 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2025-07-12 20:03:36.735472 | orchestrator | Saturday 12 July 2025 20:03:19 +0000 (0:00:00.668) 0:00:25.250 ********* 2025-07-12 20:03:36.735481 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-4, testbed-node-3, testbed-node-5 2025-07-12 20:03:36.735491 | orchestrator | 2025-07-12 20:03:36.735498 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2025-07-12 20:03:36.735504 | orchestrator | Saturday 12 July 2025 20:03:24 +0000 (0:00:04.856) 0:00:30.107 ********* 2025-07-12 20:03:36.735512 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-07-12 20:03:36.735522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-07-12 20:03:36.735529 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-07-12 20:03:36.735536 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-07-12 20:03:36.735543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-07-12 20:03:36.735550 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-07-12 20:03:36.735557 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-07-12 20:03:36.735564 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-07-12 20:03:36.735571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-07-12 20:03:36.735608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-07-12 20:03:36.735616 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-07-12 20:03:36.735637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-07-12 20:03:36.735650 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-07-12 20:03:36.735658 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': 
'192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-07-12 20:03:36.735665 | orchestrator | 2025-07-12 20:03:36.735671 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2025-07-12 20:03:36.735678 | orchestrator | Saturday 12 July 2025 20:03:30 +0000 (0:00:06.389) 0:00:36.496 ********* 2025-07-12 20:03:36.735685 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2025-07-12 20:03:36.735693 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2025-07-12 20:03:36.735700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2025-07-12 20:03:36.735707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2025-07-12 20:03:36.735714 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-07-12 20:03:36.735720 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-07-12 20:03:36.735727 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-07-12 20:03:36.735734 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2025-07-12 20:03:36.735746 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-07-12 20:03:36.735753 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-07-12 20:03:36.735760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-07-12 20:03:36.735767 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-07-12 20:03:36.735779 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-07-12 20:03:42.587367 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-07-12 20:03:42.587485 | orchestrator | 2025-07-12 20:03:42.587502 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-07-12 20:03:42.587516 | orchestrator | Saturday 12 July 2025 20:03:36 +0000 (0:00:06.114) 0:00:42.611 ********* 2025-07-12 20:03:42.587530 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:03:42.587542 | orchestrator | 2025-07-12 20:03:42.587553 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-07-12 20:03:42.587564 | orchestrator | Saturday 12 July 2025 20:03:37 +0000 (0:00:01.217) 0:00:43.828 ********* 2025-07-12 20:03:42.587575 | orchestrator | ok: [testbed-manager] 2025-07-12 20:03:42.587588 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:03:42.587598 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:03:42.587609 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:03:42.587620 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:03:42.587631 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:03:42.587642 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:03:42.587653 | orchestrator | 2025-07-12 20:03:42.587664 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2025-07-12 20:03:42.587675 | orchestrator | Saturday 12 July 2025 20:03:38 +0000 (0:00:00.947) 0:00:44.776 ********* 2025-07-12 20:03:42.587686 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 20:03:42.587697 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 20:03:42.587709 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 20:03:42.587721 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 20:03:42.587732 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 20:03:42.587743 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 20:03:42.587780 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 20:03:42.587792 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 20:03:42.587803 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:03:42.587815 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 20:03:42.587826 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 20:03:42.587837 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 20:03:42.587848 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 20:03:42.587861 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:42.587873 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 20:03:42.587886 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  
2025-07-12 20:03:42.587898 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 20:03:42.587910 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:42.587922 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 20:03:42.587935 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 20:03:42.587977 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 20:03:42.587990 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 20:03:42.588002 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 20:03:42.588015 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:42.588027 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 20:03:42.588039 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 20:03:42.588053 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 20:03:42.588065 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 20:03:42.588077 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:03:42.588090 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:03:42.588102 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-07-12 20:03:42.588115 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-07-12 20:03:42.588127 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-07-12 20:03:42.588139 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-07-12 20:03:42.588152 | 
orchestrator | skipping: [testbed-node-5] 2025-07-12 20:03:42.588164 | orchestrator | 2025-07-12 20:03:42.588178 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-07-12 20:03:42.588208 | orchestrator | Saturday 12 July 2025 20:03:40 +0000 (0:00:01.984) 0:00:46.761 ********* 2025-07-12 20:03:42.588219 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:03:42.588231 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:42.588242 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:42.588252 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:42.588263 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:03:42.588274 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:03:42.588285 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:03:42.588296 | orchestrator | 2025-07-12 20:03:42.588307 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-07-12 20:03:42.588318 | orchestrator | Saturday 12 July 2025 20:03:41 +0000 (0:00:00.788) 0:00:47.549 ********* 2025-07-12 20:03:42.588328 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:03:42.588348 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:03:42.588358 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:03:42.588369 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:03:42.588380 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:03:42.588391 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:03:42.588417 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:03:42.588428 | orchestrator | 2025-07-12 20:03:42.588450 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:03:42.588463 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 20:03:42.588475 | orchestrator | testbed-node-0 : ok=20  changed=4  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 20:03:42.588486 | orchestrator | testbed-node-1 : ok=20  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 20:03:42.588497 | orchestrator | testbed-node-2 : ok=20  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 20:03:42.588508 | orchestrator | testbed-node-3 : ok=20  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 20:03:42.588519 | orchestrator | testbed-node-4 : ok=20  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 20:03:42.588530 | orchestrator | testbed-node-5 : ok=20  changed=4  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 20:03:42.588541 | orchestrator | 2025-07-12 20:03:42.588552 | orchestrator | 2025-07-12 20:03:42.588564 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:03:42.588575 | orchestrator | Saturday 12 July 2025 20:03:42 +0000 (0:00:00.539) 0:00:48.089 ********* 2025-07-12 20:03:42.588585 | orchestrator | =============================================================================== 2025-07-12 20:03:42.588596 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.39s 2025-07-12 20:03:42.588607 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 6.11s 2025-07-12 20:03:42.588618 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.86s 2025-07-12 20:03:42.588629 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.23s 2025-07-12 20:03:42.588640 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.24s 2025-07-12 20:03:42.588651 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.19s 2025-07-12 20:03:42.588661 | orchestrator | osism.commons.network : Remove unused 
configuration files --------------- 1.98s 2025-07-12 20:03:42.588672 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.90s 2025-07-12 20:03:42.588683 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.68s 2025-07-12 20:03:42.588694 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.59s 2025-07-12 20:03:42.588705 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.44s 2025-07-12 20:03:42.588716 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.27s 2025-07-12 20:03:42.588726 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.26s 2025-07-12 20:03:42.588737 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.26s 2025-07-12 20:03:42.588748 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.23s 2025-07-12 20:03:42.588759 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.22s 2025-07-12 20:03:42.588780 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.12s 2025-07-12 20:03:42.588799 | orchestrator | osism.commons.network : Create required directories --------------------- 0.99s 2025-07-12 20:03:42.588810 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.95s 2025-07-12 20:03:42.588821 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.85s 2025-07-12 20:03:42.847099 | orchestrator | + osism apply wireguard 2025-07-12 20:03:54.731372 | orchestrator | 2025-07-12 20:03:54 | INFO  | Task f6d9ee0f-c316-4d7f-b2f1-2ea9fb39e0c5 (wireguard) was prepared for execution. 
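For context on the networkd play above: the `Create systemd networkd netdev files` and `Create systemd networkd network files` tasks render one `.netdev`/`.network` pair per VXLAN interface. A minimal sketch of what such a pair might look like for `vxlan1` on testbed-node-0, using only values visible in the logged item data (`vni: 23`, `local_ip: 192.168.16.10`, `mtu: 1350`, address `192.168.128.10/20`, file names `30-vxlan1.netdev`/`30-vxlan1.network` from the cleanup task items); the exact template output of `osism.commons.network` may differ:

```ini
# /etc/systemd/network/30-vxlan1.netdev -- sketch, values from the logged item
[NetDev]
Name=vxlan1
Kind=vxlan
MTUBytes=1350

[VXLAN]
VNI=23
Local=192.168.16.10

# /etc/systemd/network/30-vxlan1.network -- sketch
[Match]
Name=vxlan1

[Network]
Address=192.168.128.10/20
```

The `dests` list in each item (the other nodes' 192.168.16.x addresses) implies a unicast, full-mesh VXLAN; how the role wires those destinations in (e.g. static FDB entries) is not visible in this log.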
2025-07-12 20:03:54.731469 | orchestrator | 2025-07-12 20:03:54 | INFO  | It takes a moment until task f6d9ee0f-c316-4d7f-b2f1-2ea9fb39e0c5 (wireguard) has been started and output is visible here. 2025-07-12 20:04:14.116402 | orchestrator | 2025-07-12 20:04:14.116572 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-07-12 20:04:14.116605 | orchestrator | 2025-07-12 20:04:14.116627 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-07-12 20:04:14.116645 | orchestrator | Saturday 12 July 2025 20:03:58 +0000 (0:00:00.223) 0:00:00.223 ********* 2025-07-12 20:04:14.116662 | orchestrator | ok: [testbed-manager] 2025-07-12 20:04:14.116682 | orchestrator | 2025-07-12 20:04:14.116693 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-07-12 20:04:14.116705 | orchestrator | Saturday 12 July 2025 20:03:59 +0000 (0:00:01.533) 0:00:01.757 ********* 2025-07-12 20:04:14.116715 | orchestrator | changed: [testbed-manager] 2025-07-12 20:04:14.116728 | orchestrator | 2025-07-12 20:04:14.116739 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-07-12 20:04:14.116750 | orchestrator | Saturday 12 July 2025 20:04:06 +0000 (0:00:06.193) 0:00:07.950 ********* 2025-07-12 20:04:14.116761 | orchestrator | changed: [testbed-manager] 2025-07-12 20:04:14.116772 | orchestrator | 2025-07-12 20:04:14.116782 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-07-12 20:04:14.116793 | orchestrator | Saturday 12 July 2025 20:04:06 +0000 (0:00:00.583) 0:00:08.534 ********* 2025-07-12 20:04:14.116810 | orchestrator | changed: [testbed-manager] 2025-07-12 20:04:14.116828 | orchestrator | 2025-07-12 20:04:14.116911 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-07-12 20:04:14.116934 | orchestrator 
| Saturday 12 July 2025 20:04:07 +0000 (0:00:00.474) 0:00:09.008 ********* 2025-07-12 20:04:14.116981 | orchestrator | ok: [testbed-manager] 2025-07-12 20:04:14.117004 | orchestrator | 2025-07-12 20:04:14.117093 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-07-12 20:04:14.117108 | orchestrator | Saturday 12 July 2025 20:04:07 +0000 (0:00:00.526) 0:00:09.535 ********* 2025-07-12 20:04:14.117119 | orchestrator | ok: [testbed-manager] 2025-07-12 20:04:14.117165 | orchestrator | 2025-07-12 20:04:14.117187 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-07-12 20:04:14.117231 | orchestrator | Saturday 12 July 2025 20:04:08 +0000 (0:00:00.528) 0:00:10.063 ********* 2025-07-12 20:04:14.117243 | orchestrator | ok: [testbed-manager] 2025-07-12 20:04:14.117253 | orchestrator | 2025-07-12 20:04:14.117264 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-07-12 20:04:14.117275 | orchestrator | Saturday 12 July 2025 20:04:08 +0000 (0:00:00.416) 0:00:10.480 ********* 2025-07-12 20:04:14.117286 | orchestrator | changed: [testbed-manager] 2025-07-12 20:04:14.117296 | orchestrator | 2025-07-12 20:04:14.117307 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-07-12 20:04:14.117318 | orchestrator | Saturday 12 July 2025 20:04:09 +0000 (0:00:01.201) 0:00:11.681 ********* 2025-07-12 20:04:14.117329 | orchestrator | changed: [testbed-manager] => (item=None) 2025-07-12 20:04:14.117340 | orchestrator | changed: [testbed-manager] 2025-07-12 20:04:14.117351 | orchestrator | 2025-07-12 20:04:14.117362 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-07-12 20:04:14.117373 | orchestrator | Saturday 12 July 2025 20:04:10 +0000 (0:00:01.016) 0:00:12.697 ********* 2025-07-12 20:04:14.117415 | orchestrator | changed: 
[testbed-manager] 2025-07-12 20:04:14.117426 | orchestrator | 2025-07-12 20:04:14.117437 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-07-12 20:04:14.117448 | orchestrator | Saturday 12 July 2025 20:04:12 +0000 (0:00:01.825) 0:00:14.522 ********* 2025-07-12 20:04:14.117460 | orchestrator | changed: [testbed-manager] 2025-07-12 20:04:14.117471 | orchestrator | 2025-07-12 20:04:14.117481 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:04:14.117493 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:04:14.117504 | orchestrator | 2025-07-12 20:04:14.117515 | orchestrator | 2025-07-12 20:04:14.117526 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:04:14.117536 | orchestrator | Saturday 12 July 2025 20:04:13 +0000 (0:00:01.001) 0:00:15.524 ********* 2025-07-12 20:04:14.117547 | orchestrator | =============================================================================== 2025-07-12 20:04:14.117558 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.19s 2025-07-12 20:04:14.117568 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.83s 2025-07-12 20:04:14.117579 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.53s 2025-07-12 20:04:14.117590 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.20s 2025-07-12 20:04:14.117601 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 1.02s 2025-07-12 20:04:14.117611 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 1.00s 2025-07-12 20:04:14.117622 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.58s 
2025-07-12 20:04:14.117633 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.53s 2025-07-12 20:04:14.117643 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.53s 2025-07-12 20:04:14.117654 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.47s 2025-07-12 20:04:14.117711 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.42s 2025-07-12 20:04:14.442215 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-07-12 20:04:14.469485 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-07-12 20:04:14.469604 | orchestrator | Dload Upload Total Spent Left Speed 2025-07-12 20:04:14.565271 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 156 0 --:--:-- --:--:-- --:--:-- 157 2025-07-12 20:04:14.582455 | orchestrator | + osism apply --environment custom workarounds 2025-07-12 20:04:16.544399 | orchestrator | 2025-07-12 20:04:16 | INFO  | Trying to run play workarounds in environment custom 2025-07-12 20:04:26.649778 | orchestrator | 2025-07-12 20:04:26 | INFO  | Task 3ac158c7-e744-4ce3-b8a6-545e2398a36c (workarounds) was prepared for execution. 2025-07-12 20:04:26.649909 | orchestrator | 2025-07-12 20:04:26 | INFO  | It takes a moment until task 3ac158c7-e744-4ce3-b8a6-545e2398a36c (workarounds) has been started and output is visible here. 
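The wireguard play above installs the package, generates server/preshared keys, and renders `/etc/wireguard/wg0.conf` before enabling `wg-quick@wg0.service`. A minimal sketch of the shape such a config takes; all key material, addresses, and the port below are placeholders, not values from this job:

```ini
# /etc/wireguard/wg0.conf -- illustrative sketch only; the role's actual
# template, addressing, and firewall rules are not shown in this log
[Interface]
Address = <vpn-address>/24
ListenPort = 51820
PrivateKey = <server-private-key>
# iptables is installed by the first task above, which wg-quick
# typically needs for PostUp/PostDown NAT or forwarding rules

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = <client-vpn-address>/32
```

The `Copy client configuration files` task then distributes the matching client-side config, which `prepare-wireguard-configuration.sh` fetches afterwards.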
2025-07-12 20:04:51.189903 | orchestrator | 2025-07-12 20:04:51.190047 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:04:51.190058 | orchestrator | 2025-07-12 20:04:51.190063 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-07-12 20:04:51.190069 | orchestrator | Saturday 12 July 2025 20:04:30 +0000 (0:00:00.136) 0:00:00.136 ********* 2025-07-12 20:04:51.190075 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-07-12 20:04:51.190080 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-07-12 20:04:51.190084 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-07-12 20:04:51.190106 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-07-12 20:04:51.190111 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-07-12 20:04:51.190116 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-07-12 20:04:51.190120 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-07-12 20:04:51.190125 | orchestrator | 2025-07-12 20:04:51.190129 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-07-12 20:04:51.190133 | orchestrator | 2025-07-12 20:04:51.190138 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-07-12 20:04:51.190142 | orchestrator | Saturday 12 July 2025 20:04:31 +0000 (0:00:00.774) 0:00:00.910 ********* 2025-07-12 20:04:51.190147 | orchestrator | ok: [testbed-manager] 2025-07-12 20:04:51.190152 | orchestrator | 2025-07-12 20:04:51.190157 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-07-12 20:04:51.190161 | orchestrator | 2025-07-12 20:04:51.190165 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-07-12 20:04:51.190170 | orchestrator | Saturday 12 July 2025 20:04:33 +0000 (0:00:02.059) 0:00:02.969 ********* 2025-07-12 20:04:51.190174 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:04:51.190178 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:04:51.190183 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:04:51.190187 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:04:51.190191 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:04:51.190195 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:04:51.190200 | orchestrator | 2025-07-12 20:04:51.190204 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-07-12 20:04:51.190208 | orchestrator | 2025-07-12 20:04:51.190213 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-07-12 20:04:51.190217 | orchestrator | Saturday 12 July 2025 20:04:35 +0000 (0:00:01.796) 0:00:04.766 ********* 2025-07-12 20:04:51.190222 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 20:04:51.190228 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 20:04:51.190232 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 20:04:51.190237 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 20:04:51.190241 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 20:04:51.190245 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-07-12 20:04:51.190249 | orchestrator | 2025-07-12 20:04:51.190254 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-07-12 20:04:51.190258 | orchestrator | Saturday 12 July 2025 20:04:36 +0000 (0:00:01.504) 0:00:06.271 ********* 2025-07-12 20:04:51.190263 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:04:51.190267 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:04:51.190272 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:04:51.190276 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:04:51.190280 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:04:51.190284 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:04:51.190289 | orchestrator | 2025-07-12 20:04:51.190293 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-07-12 20:04:51.190298 | orchestrator | Saturday 12 July 2025 20:04:40 +0000 (0:00:04.014) 0:00:10.285 ********* 2025-07-12 20:04:51.190302 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:04:51.190306 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:04:51.190311 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:04:51.190315 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:04:51.190319 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:04:51.190324 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:04:51.190332 | orchestrator | 2025-07-12 20:04:51.190336 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-07-12 20:04:51.190341 | orchestrator | 2025-07-12 20:04:51.190345 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-07-12 20:04:51.190350 | orchestrator | Saturday 12 July 2025 20:04:41 +0000 (0:00:00.728) 0:00:11.014 ********* 2025-07-12 20:04:51.190354 | orchestrator | changed: [testbed-manager] 2025-07-12 20:04:51.190358 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:04:51.190363 | orchestrator | changed: [testbed-node-4] 2025-07-12 
20:04:51.190367 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:04:51.190371 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:04:51.190376 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:04:51.190380 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:04:51.190384 | orchestrator | 2025-07-12 20:04:51.190400 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-07-12 20:04:51.190405 | orchestrator | Saturday 12 July 2025 20:04:43 +0000 (0:00:01.662) 0:00:12.677 ********* 2025-07-12 20:04:51.190409 | orchestrator | changed: [testbed-manager] 2025-07-12 20:04:51.190413 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:04:51.190418 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:04:51.190422 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:04:51.190426 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:04:51.190431 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:04:51.190446 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:04:51.190451 | orchestrator | 2025-07-12 20:04:51.190456 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-07-12 20:04:51.190461 | orchestrator | Saturday 12 July 2025 20:04:44 +0000 (0:00:01.627) 0:00:14.305 ********* 2025-07-12 20:04:51.190466 | orchestrator | ok: [testbed-manager] 2025-07-12 20:04:51.190471 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:04:51.190476 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:04:51.190481 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:04:51.190486 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:04:51.190490 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:04:51.190495 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:04:51.190500 | orchestrator | 2025-07-12 20:04:51.190505 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-07-12 20:04:51.190510 | orchestrator 
| Saturday 12 July 2025 20:04:46 +0000 (0:00:01.487) 0:00:15.792 ********* 2025-07-12 20:04:51.190514 | orchestrator | changed: [testbed-manager] 2025-07-12 20:04:51.190519 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:04:51.190524 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:04:51.190529 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:04:51.190534 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:04:51.190539 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:04:51.190544 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:04:51.190549 | orchestrator | 2025-07-12 20:04:51.190554 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-07-12 20:04:51.190559 | orchestrator | Saturday 12 July 2025 20:04:48 +0000 (0:00:01.775) 0:00:17.568 ********* 2025-07-12 20:04:51.190564 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:04:51.190568 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:04:51.190573 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:04:51.190578 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:04:51.190583 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:04:51.190588 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:04:51.190592 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:04:51.190597 | orchestrator | 2025-07-12 20:04:51.190602 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-07-12 20:04:51.190607 | orchestrator | 2025-07-12 20:04:51.190612 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-07-12 20:04:51.190617 | orchestrator | Saturday 12 July 2025 20:04:48 +0000 (0:00:00.606) 0:00:18.175 ********* 2025-07-12 20:04:51.190626 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:04:51.190631 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:04:51.190636 | orchestrator | ok: [testbed-node-2] 
2025-07-12 20:04:51.190641 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:04:51.190646 | orchestrator | ok: [testbed-manager] 2025-07-12 20:04:51.190651 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:04:51.190656 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:04:51.190661 | orchestrator | 2025-07-12 20:04:51.190666 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:04:51.190671 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-07-12 20:04:51.190678 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 20:04:51.190683 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 20:04:51.190688 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 20:04:51.190693 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 20:04:51.190698 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 20:04:51.190703 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 20:04:51.190708 | orchestrator | 2025-07-12 20:04:51.190713 | orchestrator | 2025-07-12 20:04:51.190718 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:04:51.190723 | orchestrator | Saturday 12 July 2025 20:04:51 +0000 (0:00:02.488) 0:00:20.663 ********* 2025-07-12 20:04:51.190728 | orchestrator | =============================================================================== 2025-07-12 20:04:51.190732 | orchestrator | Run update-ca-certificates ---------------------------------------------- 4.01s 2025-07-12 20:04:51.190737 | orchestrator | Install python3-docker 
-------------------------------------------------- 2.49s 2025-07-12 20:04:51.190742 | orchestrator | Apply netplan configuration --------------------------------------------- 2.06s 2025-07-12 20:04:51.190747 | orchestrator | Apply netplan configuration --------------------------------------------- 1.80s 2025-07-12 20:04:51.190752 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.78s 2025-07-12 20:04:51.190757 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.66s 2025-07-12 20:04:51.190765 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.63s 2025-07-12 20:04:51.190770 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.50s 2025-07-12 20:04:51.190775 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.49s 2025-07-12 20:04:51.190780 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.77s 2025-07-12 20:04:51.190785 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.73s 2025-07-12 20:04:51.190792 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.61s 2025-07-12 20:04:51.770909 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-07-12 20:05:03.830288 | orchestrator | 2025-07-12 20:05:03 | INFO  | Task 95e9478e-798e-4998-9909-b8558473968a (reboot) was prepared for execution. 2025-07-12 20:05:03.830369 | orchestrator | 2025-07-12 20:05:03 | INFO  | It takes a moment until task 95e9478e-798e-4998-9909-b8558473968a (reboot) has been started and output is visible here. 
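
The `osism apply reboot -l testbed-nodes -e ireallymeanit=yes` invocation above relies on a confirmation gate: the play's first task ("Exit playbook, if user did not mean to reboot systems") bails out unless `ireallymeanit=yes` is passed. A minimal shell sketch of the same guard idea (hypothetical wrapper function, not the actual playbook logic):

```shell
#!/usr/bin/env bash
# Sketch of the confirmation gate the reboot play enforces via
# -e ireallymeanit=yes (hypothetical helper; the real check is an
# Ansible task, not shell).
confirm_reboot() {
  local ireallymeanit=${1:-no}
  if [[ $ireallymeanit != yes ]]; then
    echo "refusing to reboot: pass ireallymeanit=yes to confirm" >&2
    return 1
  fi
  echo "confirmed"
}
```

The gate makes an accidental `osism apply reboot` a no-op instead of a cluster-wide outage.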
2025-07-12 20:05:14.079548 | orchestrator | 2025-07-12 20:05:14.079726 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 20:05:14.079756 | orchestrator | 2025-07-12 20:05:14.079777 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-12 20:05:14.079795 | orchestrator | Saturday 12 July 2025 20:05:07 +0000 (0:00:00.213) 0:00:00.213 ********* 2025-07-12 20:05:14.079813 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:14.079833 | orchestrator | 2025-07-12 20:05:14.079851 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 20:05:14.079871 | orchestrator | Saturday 12 July 2025 20:05:08 +0000 (0:00:00.103) 0:00:00.316 ********* 2025-07-12 20:05:14.079890 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:05:14.079910 | orchestrator | 2025-07-12 20:05:14.079929 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-12 20:05:14.079949 | orchestrator | Saturday 12 July 2025 20:05:08 +0000 (0:00:00.954) 0:00:01.270 ********* 2025-07-12 20:05:14.080008 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:05:14.080028 | orchestrator | 2025-07-12 20:05:14.080047 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 20:05:14.080067 | orchestrator | 2025-07-12 20:05:14.080086 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-12 20:05:14.080106 | orchestrator | Saturday 12 July 2025 20:05:09 +0000 (0:00:00.116) 0:00:01.387 ********* 2025-07-12 20:05:14.080126 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:14.080145 | orchestrator | 2025-07-12 20:05:14.080164 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 20:05:14.080183 | orchestrator | Saturday 12 July 2025 
20:05:09 +0000 (0:00:00.095) 0:00:01.483 ********* 2025-07-12 20:05:14.080201 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:05:14.080219 | orchestrator | 2025-07-12 20:05:14.080238 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-12 20:05:14.080255 | orchestrator | Saturday 12 July 2025 20:05:09 +0000 (0:00:00.703) 0:00:02.187 ********* 2025-07-12 20:05:14.080273 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:05:14.080290 | orchestrator | 2025-07-12 20:05:14.080308 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 20:05:14.080326 | orchestrator | 2025-07-12 20:05:14.080343 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-12 20:05:14.080362 | orchestrator | Saturday 12 July 2025 20:05:10 +0000 (0:00:00.106) 0:00:02.293 ********* 2025-07-12 20:05:14.080379 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:14.080397 | orchestrator | 2025-07-12 20:05:14.080415 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 20:05:14.080433 | orchestrator | Saturday 12 July 2025 20:05:10 +0000 (0:00:00.192) 0:00:02.486 ********* 2025-07-12 20:05:14.080452 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:05:14.080472 | orchestrator | 2025-07-12 20:05:14.080490 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-12 20:05:14.080507 | orchestrator | Saturday 12 July 2025 20:05:10 +0000 (0:00:00.664) 0:00:03.151 ********* 2025-07-12 20:05:14.080524 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:05:14.080542 | orchestrator | 2025-07-12 20:05:14.080561 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 20:05:14.080580 | orchestrator | 2025-07-12 20:05:14.080599 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2025-07-12 20:05:14.080616 | orchestrator | Saturday 12 July 2025 20:05:10 +0000 (0:00:00.109) 0:00:03.260 ********* 2025-07-12 20:05:14.080634 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:14.080653 | orchestrator | 2025-07-12 20:05:14.080673 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 20:05:14.080690 | orchestrator | Saturday 12 July 2025 20:05:11 +0000 (0:00:00.108) 0:00:03.369 ********* 2025-07-12 20:05:14.080708 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:05:14.080720 | orchestrator | 2025-07-12 20:05:14.080731 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-12 20:05:14.080759 | orchestrator | Saturday 12 July 2025 20:05:11 +0000 (0:00:00.706) 0:00:04.075 ********* 2025-07-12 20:05:14.080770 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:05:14.080781 | orchestrator | 2025-07-12 20:05:14.080792 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 20:05:14.080803 | orchestrator | 2025-07-12 20:05:14.080814 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-12 20:05:14.080825 | orchestrator | Saturday 12 July 2025 20:05:11 +0000 (0:00:00.122) 0:00:04.197 ********* 2025-07-12 20:05:14.080835 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:14.080846 | orchestrator | 2025-07-12 20:05:14.080857 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 20:05:14.080868 | orchestrator | Saturday 12 July 2025 20:05:12 +0000 (0:00:00.111) 0:00:04.309 ********* 2025-07-12 20:05:14.080879 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:05:14.080889 | orchestrator | 2025-07-12 20:05:14.080906 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2025-07-12 20:05:14.080937 | orchestrator | Saturday 12 July 2025 20:05:12 +0000 (0:00:00.689) 0:00:04.999 ********* 2025-07-12 20:05:14.080949 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:05:14.080960 | orchestrator | 2025-07-12 20:05:14.081005 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-07-12 20:05:14.081022 | orchestrator | 2025-07-12 20:05:14.081034 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-07-12 20:05:14.081045 | orchestrator | Saturday 12 July 2025 20:05:12 +0000 (0:00:00.119) 0:00:05.118 ********* 2025-07-12 20:05:14.081055 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:14.081066 | orchestrator | 2025-07-12 20:05:14.081077 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-07-12 20:05:14.081088 | orchestrator | Saturday 12 July 2025 20:05:12 +0000 (0:00:00.103) 0:00:05.221 ********* 2025-07-12 20:05:14.081099 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:05:14.081110 | orchestrator | 2025-07-12 20:05:14.081121 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-07-12 20:05:14.081132 | orchestrator | Saturday 12 July 2025 20:05:13 +0000 (0:00:00.767) 0:00:05.989 ********* 2025-07-12 20:05:14.081180 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:05:14.081192 | orchestrator | 2025-07-12 20:05:14.081204 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:05:14.081216 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 20:05:14.081230 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 20:05:14.081249 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2025-07-12 20:05:14.081261 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 20:05:14.081272 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 20:05:14.081282 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 20:05:14.081293 | orchestrator | 2025-07-12 20:05:14.081304 | orchestrator | 2025-07-12 20:05:14.081315 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:05:14.081326 | orchestrator | Saturday 12 July 2025 20:05:13 +0000 (0:00:00.040) 0:00:06.029 ********* 2025-07-12 20:05:14.081336 | orchestrator | =============================================================================== 2025-07-12 20:05:14.081355 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.49s 2025-07-12 20:05:14.081366 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.71s 2025-07-12 20:05:14.081377 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.62s 2025-07-12 20:05:14.363942 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-07-12 20:05:26.294385 | orchestrator | 2025-07-12 20:05:26 | INFO  | Task 2afe2d49-b463-44de-8822-1b757245bd3f (wait-for-connection) was prepared for execution. 2025-07-12 20:05:26.294523 | orchestrator | 2025-07-12 20:05:26 | INFO  | It takes a moment until task 2afe2d49-b463-44de-8822-1b757245bd3f (wait-for-connection) has been started and output is visible here. 
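
The sequence above is a deliberate two-step pattern: `osism apply reboot` fires the reboot without waiting ("do not wait for the reboot to complete"), and the separate `osism apply wait-for-connection` run then blocks until every node answers again. A self-contained sketch of the waiting half, with the probe command injectable so the example needs no real hosts (a real run would probe via ssh or an Ansible connection test):

```shell
#!/usr/bin/env bash
# Sketch of the wait-for-connection polling loop (assumed structure, not
# the actual osism playbook). The probe is a parameter purely so this
# sketch stays runnable without remote hosts.
wait_for_connection() {
  local probe=$1 max_attempts=${2:-60} attempt=1
  until $probe; do
    if (( attempt++ >= max_attempts )); then
      echo "timeout waiting for connection" >&2
      return 1
    fi
    sleep 1
  done
  echo "reachable"
}
```

Splitting reboot and wait into two tasks lets all nodes reboot in parallel instead of serializing on each node's downtime.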
2025-07-12 20:05:42.196893 | orchestrator | 2025-07-12 20:05:42.197014 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-07-12 20:05:42.197031 | orchestrator | 2025-07-12 20:05:42.197043 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-07-12 20:05:42.197055 | orchestrator | Saturday 12 July 2025 20:05:30 +0000 (0:00:00.244) 0:00:00.244 ********* 2025-07-12 20:05:42.197066 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:05:42.197078 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:05:42.197089 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:05:42.197100 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:05:42.197110 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:05:42.197121 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:05:42.197132 | orchestrator | 2025-07-12 20:05:42.197143 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:05:42.197155 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:05:42.197168 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:05:42.197179 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:05:42.197190 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:05:42.197201 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:05:42.197232 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:05:42.197244 | orchestrator | 2025-07-12 20:05:42.197255 | orchestrator | 2025-07-12 20:05:42.197267 | orchestrator | TASKS RECAP 
******************************************************************** 2025-07-12 20:05:42.197278 | orchestrator | Saturday 12 July 2025 20:05:41 +0000 (0:00:11.631) 0:00:11.875 ********* 2025-07-12 20:05:42.197289 | orchestrator | =============================================================================== 2025-07-12 20:05:42.197299 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.63s 2025-07-12 20:05:42.455845 | orchestrator | + osism apply hddtemp 2025-07-12 20:05:54.366551 | orchestrator | 2025-07-12 20:05:54 | INFO  | Task 6478140a-4cad-436e-8cda-e20ce233647d (hddtemp) was prepared for execution. 2025-07-12 20:05:54.366667 | orchestrator | 2025-07-12 20:05:54 | INFO  | It takes a moment until task 6478140a-4cad-436e-8cda-e20ce233647d (hddtemp) has been started and output is visible here. 2025-07-12 20:06:22.152115 | orchestrator | 2025-07-12 20:06:22.152210 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-07-12 20:06:22.152220 | orchestrator | 2025-07-12 20:06:22.152225 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-07-12 20:06:22.152231 | orchestrator | Saturday 12 July 2025 20:05:58 +0000 (0:00:00.253) 0:00:00.253 ********* 2025-07-12 20:06:22.152235 | orchestrator | ok: [testbed-manager] 2025-07-12 20:06:22.152257 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:06:22.152261 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:06:22.152265 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:06:22.152269 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:06:22.152273 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:06:22.152276 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:06:22.152280 | orchestrator | 2025-07-12 20:06:22.152284 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-07-12 20:06:22.152288 | orchestrator | Saturday 12 July 2025 
20:05:59 +0000 (0:00:00.743) 0:00:00.997 ********* 2025-07-12 20:06:22.152294 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:06:22.152300 | orchestrator | 2025-07-12 20:06:22.152304 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-07-12 20:06:22.152308 | orchestrator | Saturday 12 July 2025 20:06:00 +0000 (0:00:01.196) 0:00:02.194 ********* 2025-07-12 20:06:22.152312 | orchestrator | ok: [testbed-manager] 2025-07-12 20:06:22.152315 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:06:22.152319 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:06:22.152323 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:06:22.152326 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:06:22.152330 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:06:22.152334 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:06:22.152338 | orchestrator | 2025-07-12 20:06:22.152342 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-07-12 20:06:22.152346 | orchestrator | Saturday 12 July 2025 20:06:02 +0000 (0:00:01.846) 0:00:04.040 ********* 2025-07-12 20:06:22.152349 | orchestrator | changed: [testbed-manager] 2025-07-12 20:06:22.152354 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:06:22.152358 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:06:22.152361 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:06:22.152365 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:06:22.152369 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:06:22.152372 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:06:22.152376 | orchestrator | 2025-07-12 20:06:22.152380 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-07-12 20:06:22.152384 | orchestrator | Saturday 12 July 2025 20:06:03 +0000 (0:00:01.154) 0:00:05.195 ********* 2025-07-12 20:06:22.152387 | orchestrator | ok: [testbed-manager] 2025-07-12 20:06:22.152391 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:06:22.152395 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:06:22.152398 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:06:22.152402 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:06:22.152406 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:06:22.152409 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:06:22.152413 | orchestrator | 2025-07-12 20:06:22.152417 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-07-12 20:06:22.152421 | orchestrator | Saturday 12 July 2025 20:06:05 +0000 (0:00:01.654) 0:00:06.849 ********* 2025-07-12 20:06:22.152424 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:06:22.152428 | orchestrator | changed: [testbed-manager] 2025-07-12 20:06:22.152432 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:06:22.152435 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:06:22.152439 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:06:22.152443 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:06:22.152446 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:06:22.152450 | orchestrator | 2025-07-12 20:06:22.152454 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-07-12 20:06:22.152458 | orchestrator | Saturday 12 July 2025 20:06:06 +0000 (0:00:01.205) 0:00:08.055 ********* 2025-07-12 20:06:22.152462 | orchestrator | changed: [testbed-manager] 2025-07-12 20:06:22.152466 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:06:22.152473 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:06:22.152477 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:06:22.152481 | orchestrator | changed: 
[testbed-node-0] 2025-07-12 20:06:22.152484 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:06:22.152488 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:06:22.152492 | orchestrator | 2025-07-12 20:06:22.152496 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-07-12 20:06:22.152499 | orchestrator | Saturday 12 July 2025 20:06:18 +0000 (0:00:12.411) 0:00:20.466 ********* 2025-07-12 20:06:22.152503 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:06:22.152507 | orchestrator | 2025-07-12 20:06:22.152511 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-07-12 20:06:22.152515 | orchestrator | Saturday 12 July 2025 20:06:19 +0000 (0:00:01.243) 0:00:21.709 ********* 2025-07-12 20:06:22.152519 | orchestrator | changed: [testbed-manager] 2025-07-12 20:06:22.152523 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:06:22.152527 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:06:22.152530 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:06:22.152534 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:06:22.152538 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:06:22.152542 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:06:22.152545 | orchestrator | 2025-07-12 20:06:22.152549 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:06:22.152553 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:06:22.152568 | orchestrator | testbed-node-0 : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:06:22.152572 | orchestrator | testbed-node-1 : ok=9 
 changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:06:22.152576 | orchestrator | testbed-node-2 : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:06:22.152580 | orchestrator | testbed-node-3 : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:06:22.152583 | orchestrator | testbed-node-4 : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:06:22.152587 | orchestrator | testbed-node-5 : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:06:22.152591 | orchestrator | 2025-07-12 20:06:22.152595 | orchestrator | 2025-07-12 20:06:22.152612 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:06:22.152616 | orchestrator | Saturday 12 July 2025 20:06:21 +0000 (0:00:01.878) 0:00:23.587 ********* 2025-07-12 20:06:22.152620 | orchestrator | =============================================================================== 2025-07-12 20:06:22.152623 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.41s 2025-07-12 20:06:22.152627 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.88s 2025-07-12 20:06:22.152631 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.85s 2025-07-12 20:06:22.152635 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.65s 2025-07-12 20:06:22.152638 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.24s 2025-07-12 20:06:22.152642 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 1.21s 2025-07-12 20:06:22.152646 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.20s 2025-07-12 20:06:22.152654 | orchestrator | osism.services.hddtemp : Enable Kernel 
Module drivetemp ----------------- 1.15s 2025-07-12 20:06:22.152658 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.74s 2025-07-12 20:06:22.450486 | orchestrator | ++ semver 9.2.0 7.1.1 2025-07-12 20:06:22.503531 | orchestrator | + [[ 1 -ge 0 ]] 2025-07-12 20:06:22.503581 | orchestrator | + sudo systemctl restart manager.service 2025-07-12 20:06:36.113568 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-07-12 20:06:36.113708 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-07-12 20:06:36.113727 | orchestrator | + local max_attempts=60 2025-07-12 20:06:36.113741 | orchestrator | + local name=ceph-ansible 2025-07-12 20:06:36.113754 | orchestrator | + local attempt_num=1 2025-07-12 20:06:36.113766 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 20:06:36.155113 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-12 20:06:36.155214 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 20:06:36.155229 | orchestrator | + sleep 5 2025-07-12 20:06:41.160244 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 20:06:41.205265 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-12 20:06:41.205358 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 20:06:41.205369 | orchestrator | + sleep 5 2025-07-12 20:06:46.208465 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 20:06:46.246323 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-12 20:06:46.246433 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 20:06:46.246456 | orchestrator | + sleep 5 2025-07-12 20:06:51.250764 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 20:06:51.293754 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-12 20:06:51.293874 | orchestrator | + (( 
attempt_num++ == max_attempts )) 2025-07-12 20:06:51.293896 | orchestrator | + sleep 5 2025-07-12 20:06:56.301724 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 20:06:56.339836 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-12 20:06:56.340134 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 20:06:56.340156 | orchestrator | + sleep 5 2025-07-12 20:07:01.345248 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 20:07:01.385628 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-12 20:07:01.385719 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 20:07:01.385729 | orchestrator | + sleep 5 2025-07-12 20:07:06.390459 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 20:07:06.435035 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-07-12 20:07:06.435136 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 20:07:06.435152 | orchestrator | + sleep 5 2025-07-12 20:07:11.441451 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 20:07:11.489333 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-12 20:07:11.489436 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 20:07:11.489453 | orchestrator | + sleep 5 2025-07-12 20:07:16.491826 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 20:07:16.523842 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-12 20:07:16.523941 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 20:07:16.523957 | orchestrator | + sleep 5 2025-07-12 20:07:21.527518 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 20:07:21.571805 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-12 20:07:21.571883 | orchestrator | + (( attempt_num++ == 
max_attempts )) 2025-07-12 20:07:21.571894 | orchestrator | + sleep 5 2025-07-12 20:07:26.578404 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 20:07:26.618360 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-12 20:07:26.618455 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 20:07:26.618469 | orchestrator | + sleep 5 2025-07-12 20:07:31.622526 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 20:07:31.662239 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-12 20:07:31.662333 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 20:07:31.662349 | orchestrator | + sleep 5 2025-07-12 20:07:36.666631 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 20:07:36.707807 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2025-07-12 20:07:36.707937 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-07-12 20:07:36.707955 | orchestrator | + sleep 5 2025-07-12 20:07:41.712745 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-07-12 20:07:41.758390 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-12 20:07:41.758483 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-07-12 20:07:41.758507 | orchestrator | + local max_attempts=60 2025-07-12 20:07:41.758528 | orchestrator | + local name=kolla-ansible 2025-07-12 20:07:41.758546 | orchestrator | + local attempt_num=1 2025-07-12 20:07:41.758918 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-07-12 20:07:41.798333 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-12 20:07:41.798417 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-07-12 20:07:41.798428 | orchestrator | + local max_attempts=60 2025-07-12 20:07:41.798437 | orchestrator | + local name=osism-ansible 2025-07-12 20:07:41.798446 | 
orchestrator | + local attempt_num=1 2025-07-12 20:07:41.798643 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2025-07-12 20:07:41.831275 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-07-12 20:07:41.831373 | orchestrator | + [[ true == \t\r\u\e ]] 2025-07-12 20:07:41.831388 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-07-12 20:07:42.006612 | orchestrator | ARA in ceph-ansible already disabled. 2025-07-12 20:07:42.171778 | orchestrator | ARA in kolla-ansible already disabled. 2025-07-12 20:07:42.330574 | orchestrator | ARA in osism-ansible already disabled. 2025-07-12 20:07:42.467383 | orchestrator | ARA in osism-kubernetes already disabled. 2025-07-12 20:07:42.467813 | orchestrator | + osism apply gather-facts 2025-07-12 20:07:54.548364 | orchestrator | 2025-07-12 20:07:54 | INFO  | Task 1ca63b58-d54a-45b5-8466-893b3402d695 (gather-facts) was prepared for execution. 2025-07-12 20:07:54.548486 | orchestrator | 2025-07-12 20:07:54 | INFO  | It takes a moment until task 1ca63b58-d54a-45b5-8466-893b3402d695 (gather-facts) has been started and output is visible here. 
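
The xtrace output above shows `wait_for_container_healthy` polling `docker inspect` every five seconds until the container reports `healthy` (ceph-ansible cycles through `unhealthy` and `starting` before succeeding; kolla-ansible and osism-ansible pass immediately). A reconstruction based on that trace (the real script may differ slightly; plain `docker` is used here instead of `/usr/bin/docker` so it can be stubbed):

```shell
#!/usr/bin/env bash
# Reconstruction of wait_for_container_healthy from the xtrace above:
# poll the container's health status until it is "healthy" or the
# attempt budget is exhausted.
wait_for_container_healthy() {
  local max_attempts=$1
  local name=$2
  local attempt_num=1
  until [[ $(docker inspect -f '{{.State.Health.Status}}' "$name") == healthy ]]; do
    if (( attempt_num++ == max_attempts )); then
      echo "container $name never became healthy" >&2  # not in the trace; added for the sketch
      return 1
    fi
    sleep 5
  done
}
```

With `max_attempts=60` and a 5-second sleep this allows roughly five minutes per container, which matches the ~65 seconds ceph-ansible needed after the manager.service restart.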
2025-07-12 20:08:07.861465 | orchestrator |
2025-07-12 20:08:07.861588 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-12 20:08:07.861607 | orchestrator |
2025-07-12 20:08:07.861619 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 20:08:07.861631 | orchestrator | Saturday 12 July 2025 20:07:58 +0000 (0:00:00.196) 0:00:00.196 *********
2025-07-12 20:08:07.861642 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:08:07.861654 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:08:07.861665 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:08:07.861677 | orchestrator | ok: [testbed-manager]
2025-07-12 20:08:07.861687 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:08:07.861720 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:08:07.861731 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:08:07.861742 | orchestrator |
2025-07-12 20:08:07.861753 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-07-12 20:08:07.861764 | orchestrator |
2025-07-12 20:08:07.861776 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-07-12 20:08:07.861787 | orchestrator | Saturday 12 July 2025 20:08:06 +0000 (0:00:08.313) 0:00:08.509 *********
2025-07-12 20:08:07.861798 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:08:07.861810 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:08:07.861821 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:08:07.861832 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:08:07.861843 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:08:07.861854 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:08:07.861865 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:08:07.861875 | orchestrator |
2025-07-12 20:08:07.861886 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:08:07.861898 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 20:08:07.861910 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 20:08:07.861950 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 20:08:07.861962 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 20:08:07.861973 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 20:08:07.861983 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 20:08:07.861994 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 20:08:07.862136 | orchestrator |
2025-07-12 20:08:07.862155 | orchestrator |
2025-07-12 20:08:07.862166 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:08:07.862177 | orchestrator | Saturday 12 July 2025 20:08:07 +0000 (0:00:00.585) 0:00:09.095 *********
2025-07-12 20:08:07.862204 | orchestrator | ===============================================================================
2025-07-12 20:08:07.862216 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.31s
2025-07-12 20:08:07.862226 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.59s
2025-07-12 20:08:08.158612 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-07-12 20:08:08.170358 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-07-12 20:08:08.180089 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-07-12 20:08:08.190277 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-07-12 20:08:08.204540 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-07-12 20:08:08.214773 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-07-12 20:08:08.227792 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-07-12 20:08:08.245307 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-07-12 20:08:08.255707 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-07-12 20:08:08.267098 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-07-12 20:08:08.282906 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-07-12 20:08:08.308431 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-07-12 20:08:08.327807 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-07-12 20:08:08.348852 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-07-12 20:08:08.364309 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-07-12 20:08:08.384259 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-07-12 20:08:08.402184 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-07-12 20:08:08.417867 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-07-12 20:08:08.435209 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-07-12 20:08:08.456853 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-07-12 20:08:08.475932 | orchestrator | + [[ false == \t\r\u\e ]]
2025-07-12 20:08:08.659739 | orchestrator | ok: Runtime: 0:19:21.954804
2025-07-12 20:08:08.751896 |
2025-07-12 20:08:08.752006 | TASK [Deploy services]
2025-07-12 20:08:09.283524 | orchestrator | skipping: Conditional result was False
2025-07-12 20:08:09.300246 |
2025-07-12 20:08:09.300452 | TASK [Deploy in a nutshell]
2025-07-12 20:08:09.987210 | orchestrator | + set -e
2025-07-12 20:08:09.987321 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-12 20:08:09.987331 | orchestrator | ++ export INTERACTIVE=false
2025-07-12 20:08:09.987340 | orchestrator | ++ INTERACTIVE=false
2025-07-12 20:08:09.987349 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-12 20:08:09.987354 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-12 20:08:09.987414 | orchestrator | + source /opt/manager-vars.sh
2025-07-12 20:08:09.989700 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-12 20:08:09.989732 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-12 20:08:09.989740 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-12 20:08:09.989748 | orchestrator | ++ CEPH_VERSION=reef
2025-07-12 20:08:09.989754 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-12 20:08:09.989765 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-12 20:08:09.989779 | orchestrator |
2025-07-12 20:08:09.989785 | orchestrator | # PULL IMAGES
2025-07-12 20:08:09.989790 | orchestrator |
2025-07-12 20:08:09.989795 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-07-12 20:08:09.989803 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-07-12 20:08:09.989809 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-12 20:08:09.989815 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-12 20:08:09.989820 | orchestrator | ++ export ARA=false
2025-07-12 20:08:09.989825 | orchestrator | ++ ARA=false
2025-07-12 20:08:09.989830 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-12 20:08:09.989836 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-12 20:08:09.989841 | orchestrator | ++ export TEMPEST=false
2025-07-12 20:08:09.989846 | orchestrator | ++ TEMPEST=false
2025-07-12 20:08:09.989851 | orchestrator | ++ export IS_ZUUL=true
2025-07-12 20:08:09.989856 | orchestrator | ++ IS_ZUUL=true
2025-07-12 20:08:09.989861 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109
2025-07-12 20:08:09.989866 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109
2025-07-12 20:08:09.989871 | orchestrator | ++ export EXTERNAL_API=false
2025-07-12 20:08:09.989876 | orchestrator | ++ EXTERNAL_API=false
2025-07-12 20:08:09.989881 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-12 20:08:09.989887 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-12 20:08:09.989892 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-12 20:08:09.989897 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-12 20:08:09.989902 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-12 20:08:09.989912 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-12 20:08:09.989917 | orchestrator | + echo
2025-07-12 20:08:09.989922 | orchestrator | + echo '# PULL IMAGES'
2025-07-12 20:08:09.989928 | orchestrator | + echo
2025-07-12 20:08:09.989933 | orchestrator | ++ semver 9.2.0 7.0.0
2025-07-12 20:08:10.050378 | orchestrator | + [[ 1 -ge 0 ]]
2025-07-12 20:08:10.050446 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2025-07-12 20:08:11.893687 | orchestrator | 2025-07-12 20:08:11 | INFO  | Trying to run play pull-images in environment custom
2025-07-12 20:08:22.122297 | orchestrator | 2025-07-12 20:08:22 | INFO  | Task 25e2b18a-7f3c-4ddf-af70-025edfb1b11b (pull-images) was prepared for execution.
2025-07-12 20:08:22.122420 | orchestrator | 2025-07-12 20:08:22 | INFO  | Task 25e2b18a-7f3c-4ddf-af70-025edfb1b11b is running in background. No more output. Check ARA for logs.
2025-07-12 20:08:24.306786 | orchestrator | 2025-07-12 20:08:24 | INFO  | Trying to run play wipe-partitions in environment custom
2025-07-12 20:08:34.390975 | orchestrator | 2025-07-12 20:08:34 | INFO  | Task acd5eb95-bdba-432c-a202-104229df3a3d (wipe-partitions) was prepared for execution.
2025-07-12 20:08:34.391150 | orchestrator | 2025-07-12 20:08:34 | INFO  | It takes a moment until task acd5eb95-bdba-432c-a202-104229df3a3d (wipe-partitions) has been started and output is visible here.
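The wipe-partitions play that follows runs, per OSD device, essentially three steps: drop existing signatures with wipefs, zero the first 32 MiB, then re-trigger udev so the kernel re-reads the now-empty devices. A hedged sketch of that sequence — the helper name is invented, and pointing it at a real disk is destructive:

```shell
# Hypothetical per-device wipe helper, mirroring the tasks in the play:
# "Wipe partitions with wipefs", "Overwrite first 32M with zeros",
# "Reload udev rules", "Request device events from the kernel".
# DESTRUCTIVE when given a real block device such as /dev/sdb.
wipe_device() {
    local device="$1"
    wipefs --all "$device"                                      # drop filesystem/RAID/LVM signatures
    dd if=/dev/zero of="$device" bs=1M count=32 conv=notrunc    # zero the first 32 MiB
    udevadm control --reload-rules                              # reload udev rules
    udevadm trigger                                             # request device events from the kernel
}
```

Zeroing the first 32 MiB clears partition tables and any leftover LVM/Ceph metadata at the start of the device, which is why the play still reports the wipefs step as `ok` but the overwrite as `changed` on every disk.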
2025-07-12 20:08:46.832377 | orchestrator |
2025-07-12 20:08:46.832501 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-07-12 20:08:46.832519 | orchestrator |
2025-07-12 20:08:46.832532 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-07-12 20:08:46.832549 | orchestrator | Saturday 12 July 2025 20:08:38 +0000 (0:00:00.150) 0:00:00.150 *********
2025-07-12 20:08:46.832560 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:08:46.832572 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:08:46.832584 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:08:46.832595 | orchestrator |
2025-07-12 20:08:46.832606 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-07-12 20:08:46.832645 | orchestrator | Saturday 12 July 2025 20:08:39 +0000 (0:00:00.573) 0:00:00.724 *********
2025-07-12 20:08:46.832657 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:08:46.832667 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:08:46.832678 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:08:46.832694 | orchestrator |
2025-07-12 20:08:46.832705 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-07-12 20:08:46.832716 | orchestrator | Saturday 12 July 2025 20:08:39 +0000 (0:00:00.285) 0:00:01.010 *********
2025-07-12 20:08:46.832728 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:08:46.832739 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:08:46.832750 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:08:46.832761 | orchestrator |
2025-07-12 20:08:46.832772 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-07-12 20:08:46.832783 | orchestrator | Saturday 12 July 2025 20:08:40 +0000 (0:00:00.697) 0:00:01.707 *********
2025-07-12 20:08:46.832794 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:08:46.832805 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:08:46.832815 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:08:46.832826 | orchestrator |
2025-07-12 20:08:46.832837 | orchestrator | TASK [Check device availability] ***********************************************
2025-07-12 20:08:46.832848 | orchestrator | Saturday 12 July 2025 20:08:40 +0000 (0:00:00.248) 0:00:01.956 *********
2025-07-12 20:08:46.832859 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-07-12 20:08:46.832874 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-07-12 20:08:46.832885 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-07-12 20:08:46.832898 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-07-12 20:08:46.832911 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-07-12 20:08:46.832923 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-07-12 20:08:46.832936 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-07-12 20:08:46.832949 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-07-12 20:08:46.832962 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-07-12 20:08:46.832974 | orchestrator |
2025-07-12 20:08:46.832986 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-07-12 20:08:46.832999 | orchestrator | Saturday 12 July 2025 20:08:41 +0000 (0:00:00.989) 0:00:02.946 *********
2025-07-12 20:08:46.833045 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-07-12 20:08:46.833058 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-07-12 20:08:46.833071 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-07-12 20:08:46.833083 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-07-12 20:08:46.833096 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-07-12 20:08:46.833108 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-07-12 20:08:46.833120 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-07-12 20:08:46.833133 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-07-12 20:08:46.833145 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-07-12 20:08:46.833157 | orchestrator |
2025-07-12 20:08:46.833169 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-07-12 20:08:46.833181 | orchestrator | Saturday 12 July 2025 20:08:42 +0000 (0:00:01.333) 0:00:04.279 *********
2025-07-12 20:08:46.833194 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-07-12 20:08:46.833206 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-07-12 20:08:46.833218 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-07-12 20:08:46.833232 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-07-12 20:08:46.833245 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-07-12 20:08:46.833256 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-07-12 20:08:46.833267 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-07-12 20:08:46.833278 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-07-12 20:08:46.833305 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-07-12 20:08:46.833317 | orchestrator |
2025-07-12 20:08:46.833328 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-07-12 20:08:46.833339 | orchestrator | Saturday 12 July 2025 20:08:45 +0000 (0:00:02.334) 0:00:06.614 *********
2025-07-12 20:08:46.833349 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:08:46.833360 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:08:46.833371 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:08:46.833381 | orchestrator |
2025-07-12 20:08:46.833392 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-07-12 20:08:46.833403 | orchestrator | Saturday 12 July 2025 20:08:45 +0000 (0:00:00.607) 0:00:07.221 *********
2025-07-12 20:08:46.833414 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:08:46.833424 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:08:46.833435 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:08:46.833446 | orchestrator |
2025-07-12 20:08:46.833456 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:08:46.833469 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:08:46.833482 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:08:46.833511 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:08:46.833522 | orchestrator |
2025-07-12 20:08:46.833533 | orchestrator |
2025-07-12 20:08:46.833544 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:08:46.833554 | orchestrator | Saturday 12 July 2025 20:08:46 +0000 (0:00:00.664) 0:00:07.886 *********
2025-07-12 20:08:46.833565 | orchestrator | ===============================================================================
2025-07-12 20:08:46.833576 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.33s
2025-07-12 20:08:46.833587 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.33s
2025-07-12 20:08:46.833597 | orchestrator | Check device availability ----------------------------------------------- 0.99s
2025-07-12 20:08:46.833608 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.70s
2025-07-12 20:08:46.833619 | orchestrator | Request device events from the kernel ----------------------------------- 0.66s
2025-07-12 20:08:46.833630 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s
2025-07-12 20:08:46.833640 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.57s
2025-07-12 20:08:46.833651 | orchestrator | Remove all rook related logical devices --------------------------------- 0.29s
2025-07-12 20:08:46.833662 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.25s
2025-07-12 20:08:58.972780 | orchestrator | 2025-07-12 20:08:58 | INFO  | Task 25b440b9-9eb2-405d-aa00-3d2fe8b0e72a (facts) was prepared for execution.
2025-07-12 20:08:58.972889 | orchestrator | 2025-07-12 20:08:58 | INFO  | It takes a moment until task 25b440b9-9eb2-405d-aa00-3d2fe8b0e72a (facts) has been started and output is visible here.
2025-07-12 20:09:10.848820 | orchestrator |
2025-07-12 20:09:10.848920 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-07-12 20:09:10.848931 | orchestrator |
2025-07-12 20:09:10.848939 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-07-12 20:09:10.848947 | orchestrator | Saturday 12 July 2025 20:09:02 +0000 (0:00:00.276) 0:00:00.276 *********
2025-07-12 20:09:10.848954 | orchestrator | ok: [testbed-manager]
2025-07-12 20:09:10.848961 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:09:10.848968 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:09:10.848975 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:09:10.849032 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:09:10.849040 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:09:10.849047 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:09:10.849053 | orchestrator |
2025-07-12 20:09:10.849060 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-07-12 20:09:10.849067 | orchestrator | Saturday 12 July 2025 20:09:03 +0000 (0:00:01.051) 0:00:01.328 *********
2025-07-12 20:09:10.849073 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:09:10.849081 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:09:10.849087 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:09:10.849094 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:09:10.849100 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:09:10.849107 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:09:10.849113 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:09:10.849120 | orchestrator |
2025-07-12 20:09:10.849126 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-12 20:09:10.849133 | orchestrator |
2025-07-12 20:09:10.849152 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 20:09:10.849159 | orchestrator | Saturday 12 July 2025 20:09:05 +0000 (0:00:01.216) 0:00:02.545 *********
2025-07-12 20:09:10.849166 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:09:10.849172 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:09:10.849179 | orchestrator | ok: [testbed-manager]
2025-07-12 20:09:10.849186 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:09:10.849192 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:09:10.849199 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:09:10.849205 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:09:10.849212 | orchestrator |
2025-07-12 20:09:10.849218 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-07-12 20:09:10.849225 | orchestrator |
2025-07-12 20:09:10.849231 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-07-12 20:09:10.849238 | orchestrator | Saturday 12 July 2025 20:09:09 +0000 (0:00:04.751) 0:00:07.296 *********
2025-07-12 20:09:10.849245 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:09:10.849251 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:09:10.849257 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:09:10.849264 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:09:10.849270 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:09:10.849277 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:09:10.849283 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:09:10.849289 | orchestrator |
2025-07-12 20:09:10.849296 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:09:10.849303 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:09:10.849311 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:09:10.849318 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:09:10.849324 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:09:10.849331 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:09:10.849337 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:09:10.849344 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:09:10.849350 | orchestrator |
2025-07-12 20:09:10.849357 | orchestrator |
2025-07-12 20:09:10.849363 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:09:10.849378 | orchestrator | Saturday 12 July 2025 20:09:10 +0000 (0:00:00.706) 0:00:08.003 *********
2025-07-12 20:09:10.849386 | orchestrator | ===============================================================================
2025-07-12 20:09:10.849394 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.75s
2025-07-12 20:09:10.849401 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.22s
2025-07-12 20:09:10.849409 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.05s
2025-07-12 20:09:10.849416 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.71s
2025-07-12 20:09:13.122364 | orchestrator | 2025-07-12 20:09:13 | INFO  | Task d7ea6156-00a8-4435-afa2-675c3fe7087e (ceph-configure-lvm-volumes) was prepared for execution.
2025-07-12 20:09:13.122473 | orchestrator | 2025-07-12 20:09:13 | INFO  | It takes a moment until task d7ea6156-00a8-4435-afa2-675c3fe7087e (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-07-12 20:09:25.143522 | orchestrator |
2025-07-12 20:09:25.143611 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-07-12 20:09:25.143623 | orchestrator |
2025-07-12 20:09:25.143631 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-12 20:09:25.143639 | orchestrator | Saturday 12 July 2025 20:09:17 +0000 (0:00:00.317) 0:00:00.317 *********
2025-07-12 20:09:25.143665 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 20:09:25.143673 | orchestrator |
2025-07-12 20:09:25.143681 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-07-12 20:09:25.143688 | orchestrator | Saturday 12 July 2025 20:09:17 +0000 (0:00:00.236) 0:00:00.554 *********
2025-07-12 20:09:25.143696 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:09:25.143704 | orchestrator |
2025-07-12 20:09:25.143712 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 20:09:25.143719 | orchestrator | Saturday 12 July 2025 20:09:17 +0000 (0:00:00.230) 0:00:00.784 *********
2025-07-12 20:09:25.143727 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-07-12 20:09:25.143734 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-07-12 20:09:25.143749 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-07-12 20:09:25.143766 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-07-12 20:09:25.143774 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-07-12 20:09:25.143782 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-07-12 20:09:25.143789 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-07-12 20:09:25.143796 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-07-12 20:09:25.143804 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-07-12 20:09:25.143811 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-07-12 20:09:25.143818 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-07-12 20:09:25.143825 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-07-12 20:09:25.143832 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-07-12 20:09:25.143840 | orchestrator |
2025-07-12 20:09:25.143847 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 20:09:25.143854 | orchestrator | Saturday 12 July 2025 20:09:18 +0000 (0:00:00.367) 0:00:01.152 *********
2025-07-12 20:09:25.143862 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:09:25.143869 | orchestrator |
2025-07-12 20:09:25.143891 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 20:09:25.143899 | orchestrator | Saturday 12 July 2025 20:09:18 +0000 (0:00:00.530) 0:00:01.682 *********
2025-07-12 20:09:25.143906 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:09:25.143913 | orchestrator |
2025-07-12 20:09:25.143920 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 20:09:25.143928 | orchestrator | Saturday 12 July 2025 20:09:18 +0000 (0:00:00.222) 0:00:01.905 *********
2025-07-12 20:09:25.143935 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:09:25.143942 | orchestrator |
2025-07-12 20:09:25.143949 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 20:09:25.143956 | orchestrator | Saturday 12 July 2025 20:09:19 +0000 (0:00:00.191) 0:00:02.097 *********
2025-07-12 20:09:25.143964 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:09:25.143971 | orchestrator |
2025-07-12 20:09:25.143982 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 20:09:25.143990 | orchestrator | Saturday 12 July 2025 20:09:19 +0000 (0:00:00.235) 0:00:02.332 *********
2025-07-12 20:09:25.143997 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:09:25.144004 | orchestrator |
2025-07-12 20:09:25.144011 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 20:09:25.144070 | orchestrator | Saturday 12 July 2025 20:09:19 +0000 (0:00:00.184) 0:00:02.517 *********
2025-07-12 20:09:25.144080 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:09:25.144089 | orchestrator |
2025-07-12 20:09:25.144097 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 20:09:25.144106 | orchestrator | Saturday 12 July 2025 20:09:19 +0000 (0:00:00.199) 0:00:02.716 *********
2025-07-12 20:09:25.144122 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:09:25.144131 | orchestrator |
2025-07-12 20:09:25.144140 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 20:09:25.144148 | orchestrator | Saturday 12 July 2025 20:09:19 +0000 (0:00:00.213) 0:00:02.929 *********
2025-07-12 20:09:25.144156 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:09:25.144164 | orchestrator |
2025-07-12 20:09:25.144172 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 20:09:25.144181 | orchestrator | Saturday 12 July 2025 20:09:20 +0000 (0:00:00.198) 0:00:03.128 *********
2025-07-12 20:09:25.144190 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5410106d-ed3b-4664-9779-6ad1cc9646b0)
2025-07-12 20:09:25.144200 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5410106d-ed3b-4664-9779-6ad1cc9646b0)
2025-07-12 20:09:25.144208 | orchestrator |
2025-07-12 20:09:25.144217 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 20:09:25.144225 | orchestrator | Saturday 12 July 2025 20:09:20 +0000 (0:00:00.398) 0:00:03.526 *********
2025-07-12 20:09:25.144252 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_47b67cf6-6134-4ebc-b4bd-75f5912c51d1)
2025-07-12 20:09:25.144265 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_47b67cf6-6134-4ebc-b4bd-75f5912c51d1)
2025-07-12 20:09:25.144276 | orchestrator |
2025-07-12 20:09:25.144288 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 20:09:25.144306 | orchestrator | Saturday 12 July 2025 20:09:20 +0000 (0:00:00.395) 0:00:03.922 *********
2025-07-12 20:09:25.144317 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e02eada2-9691-4994-b44c-0b327a73be9a)
2025-07-12 20:09:25.144328 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e02eada2-9691-4994-b44c-0b327a73be9a)
2025-07-12 20:09:25.144352 | orchestrator |
2025-07-12 20:09:25.144365 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 20:09:25.144376 | orchestrator | Saturday 12 July 2025 20:09:21 +0000 (0:00:00.636) 0:00:04.558 *********
2025-07-12 20:09:25.144388 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fe3c3c4e-2b96-4bec-8093-d77b3db985a2)
2025-07-12 20:09:25.144413 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fe3c3c4e-2b96-4bec-8093-d77b3db985a2)
2025-07-12 20:09:25.144425 | orchestrator |
2025-07-12 20:09:25.144436 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 20:09:25.144444 | orchestrator | Saturday 12 July 2025 20:09:22 +0000 (0:00:00.662) 0:00:05.221 *********
2025-07-12 20:09:25.144451 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-07-12 20:09:25.144458 | orchestrator |
2025-07-12 20:09:25.144465 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 20:09:25.144473 | orchestrator | Saturday 12 July 2025 20:09:23 +0000 (0:00:00.745) 0:00:05.967 *********
2025-07-12 20:09:25.144480 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-07-12 20:09:25.144487 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-07-12 20:09:25.144494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-07-12 20:09:25.144501 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-07-12 20:09:25.144508 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-07-12 20:09:25.144515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-07-12 20:09:25.144522 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-07-12 20:09:25.144529 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-07-12 20:09:25.144536 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-07-12 20:09:25.144543 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-07-12 20:09:25.144550 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-07-12 20:09:25.144557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-07-12 20:09:25.144564 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-07-12 20:09:25.144572 | orchestrator |
2025-07-12 20:09:25.144579 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 20:09:25.144586 | orchestrator | Saturday 12 July 2025 20:09:23 +0000 (0:00:00.398) 0:00:06.365 *********
2025-07-12 20:09:25.144593 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:09:25.144600 | orchestrator |
2025-07-12 20:09:25.144607 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-07-12 20:09:25.144614 | orchestrator | Saturday 12 July 2025 20:09:23 +0000 (0:00:00.206) 0:00:06.571 *********
2025-07-12 20:09:25.144621 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:09:25.144628 | orchestrator |
2025-07-12 20:09:25.144635 | orchestrator | TASK [Add known
partitions to the list of available block devices] ************* 2025-07-12 20:09:25.144642 | orchestrator | Saturday 12 July 2025 20:09:23 +0000 (0:00:00.203) 0:00:06.775 ********* 2025-07-12 20:09:25.144649 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:25.144656 | orchestrator | 2025-07-12 20:09:25.144663 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:25.144671 | orchestrator | Saturday 12 July 2025 20:09:24 +0000 (0:00:00.227) 0:00:07.003 ********* 2025-07-12 20:09:25.144678 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:25.144685 | orchestrator | 2025-07-12 20:09:25.144691 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:25.144699 | orchestrator | Saturday 12 July 2025 20:09:24 +0000 (0:00:00.211) 0:00:07.215 ********* 2025-07-12 20:09:25.144706 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:25.144713 | orchestrator | 2025-07-12 20:09:25.144720 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:25.144732 | orchestrator | Saturday 12 July 2025 20:09:24 +0000 (0:00:00.200) 0:00:07.415 ********* 2025-07-12 20:09:25.144740 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:25.144747 | orchestrator | 2025-07-12 20:09:25.144754 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:25.144761 | orchestrator | Saturday 12 July 2025 20:09:24 +0000 (0:00:00.213) 0:00:07.629 ********* 2025-07-12 20:09:25.144768 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:25.144775 | orchestrator | 2025-07-12 20:09:25.144782 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:25.144799 | orchestrator | Saturday 12 July 2025 20:09:24 +0000 (0:00:00.224) 0:00:07.853 ********* 2025-07-12 20:09:25.144814 | 
orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:32.520107 | orchestrator | 2025-07-12 20:09:32.520201 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:32.520215 | orchestrator | Saturday 12 July 2025 20:09:25 +0000 (0:00:00.213) 0:00:08.067 ********* 2025-07-12 20:09:32.520225 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-07-12 20:09:32.520235 | orchestrator | 2025-07-12 20:09:32.520244 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:32.520253 | orchestrator | Saturday 12 July 2025 20:09:25 +0000 (0:00:00.539) 0:00:08.607 ********* 2025-07-12 20:09:32.520262 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:32.520270 | orchestrator | 2025-07-12 20:09:32.520293 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:32.520303 | orchestrator | Saturday 12 July 2025 20:09:26 +0000 (0:00:00.543) 0:00:09.150 ********* 2025-07-12 20:09:32.520311 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:32.520320 | orchestrator | 2025-07-12 20:09:32.520328 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:32.520337 | orchestrator | Saturday 12 July 2025 20:09:26 +0000 (0:00:00.182) 0:00:09.333 ********* 2025-07-12 20:09:32.520345 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:32.520354 | orchestrator | 2025-07-12 20:09:32.520362 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:32.520371 | orchestrator | Saturday 12 July 2025 20:09:26 +0000 (0:00:00.186) 0:00:09.519 ********* 2025-07-12 20:09:32.520379 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:32.520387 | orchestrator | 2025-07-12 20:09:32.520396 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] 
*********************************************** 2025-07-12 20:09:32.520404 | orchestrator | Saturday 12 July 2025 20:09:26 +0000 (0:00:00.204) 0:00:09.723 ********* 2025-07-12 20:09:32.520413 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-07-12 20:09:32.520421 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-07-12 20:09:32.520430 | orchestrator | 2025-07-12 20:09:32.520438 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-07-12 20:09:32.520447 | orchestrator | Saturday 12 July 2025 20:09:26 +0000 (0:00:00.151) 0:00:09.875 ********* 2025-07-12 20:09:32.520455 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:32.520463 | orchestrator | 2025-07-12 20:09:32.520472 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-07-12 20:09:32.520480 | orchestrator | Saturday 12 July 2025 20:09:27 +0000 (0:00:00.127) 0:00:10.002 ********* 2025-07-12 20:09:32.520489 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:32.520497 | orchestrator | 2025-07-12 20:09:32.520505 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-07-12 20:09:32.520514 | orchestrator | Saturday 12 July 2025 20:09:27 +0000 (0:00:00.136) 0:00:10.139 ********* 2025-07-12 20:09:32.520522 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:32.520531 | orchestrator | 2025-07-12 20:09:32.520539 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-07-12 20:09:32.520548 | orchestrator | Saturday 12 July 2025 20:09:27 +0000 (0:00:00.124) 0:00:10.263 ********* 2025-07-12 20:09:32.520574 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:09:32.520583 | orchestrator | 2025-07-12 20:09:32.520592 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-07-12 20:09:32.520600 | 
orchestrator | Saturday 12 July 2025 20:09:27 +0000 (0:00:00.121) 0:00:10.385 ********* 2025-07-12 20:09:32.520609 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a733058e-5b74-5553-b3bf-66d1cbf46d31'}}) 2025-07-12 20:09:32.520618 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8d632655-ba67-5245-89a0-0cb971b00289'}}) 2025-07-12 20:09:32.520627 | orchestrator | 2025-07-12 20:09:32.520636 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-07-12 20:09:32.520646 | orchestrator | Saturday 12 July 2025 20:09:27 +0000 (0:00:00.149) 0:00:10.534 ********* 2025-07-12 20:09:32.520656 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a733058e-5b74-5553-b3bf-66d1cbf46d31'}})  2025-07-12 20:09:32.520673 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8d632655-ba67-5245-89a0-0cb971b00289'}})  2025-07-12 20:09:32.520683 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:32.520694 | orchestrator | 2025-07-12 20:09:32.520704 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-07-12 20:09:32.520714 | orchestrator | Saturday 12 July 2025 20:09:27 +0000 (0:00:00.136) 0:00:10.671 ********* 2025-07-12 20:09:32.520724 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a733058e-5b74-5553-b3bf-66d1cbf46d31'}})  2025-07-12 20:09:32.520733 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8d632655-ba67-5245-89a0-0cb971b00289'}})  2025-07-12 20:09:32.520741 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:32.520750 | orchestrator | 2025-07-12 20:09:32.520758 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-07-12 20:09:32.520779 | orchestrator | Saturday 12 
July 2025 20:09:27 +0000 (0:00:00.154) 0:00:10.825 ********* 2025-07-12 20:09:32.520788 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a733058e-5b74-5553-b3bf-66d1cbf46d31'}})  2025-07-12 20:09:32.520797 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8d632655-ba67-5245-89a0-0cb971b00289'}})  2025-07-12 20:09:32.520805 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:32.520814 | orchestrator | 2025-07-12 20:09:32.520822 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-07-12 20:09:32.520831 | orchestrator | Saturday 12 July 2025 20:09:28 +0000 (0:00:00.291) 0:00:11.117 ********* 2025-07-12 20:09:32.520853 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:09:32.520863 | orchestrator | 2025-07-12 20:09:32.520871 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-07-12 20:09:32.520880 | orchestrator | Saturday 12 July 2025 20:09:28 +0000 (0:00:00.145) 0:00:11.263 ********* 2025-07-12 20:09:32.520888 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:09:32.520897 | orchestrator | 2025-07-12 20:09:32.520905 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-07-12 20:09:32.520913 | orchestrator | Saturday 12 July 2025 20:09:28 +0000 (0:00:00.144) 0:00:11.407 ********* 2025-07-12 20:09:32.520922 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:32.520930 | orchestrator | 2025-07-12 20:09:32.520939 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-07-12 20:09:32.520947 | orchestrator | Saturday 12 July 2025 20:09:28 +0000 (0:00:00.131) 0:00:11.539 ********* 2025-07-12 20:09:32.520956 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:32.520964 | orchestrator | 2025-07-12 20:09:32.520972 | orchestrator | TASK [Set DB+WAL devices config 
data] ****************************************** 2025-07-12 20:09:32.520981 | orchestrator | Saturday 12 July 2025 20:09:28 +0000 (0:00:00.122) 0:00:11.662 ********* 2025-07-12 20:09:32.520989 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:32.521005 | orchestrator | 2025-07-12 20:09:32.521013 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-07-12 20:09:32.521053 | orchestrator | Saturday 12 July 2025 20:09:28 +0000 (0:00:00.149) 0:00:11.811 ********* 2025-07-12 20:09:32.521063 | orchestrator | ok: [testbed-node-3] => { 2025-07-12 20:09:32.521071 | orchestrator |  "ceph_osd_devices": { 2025-07-12 20:09:32.521080 | orchestrator |  "sdb": { 2025-07-12 20:09:32.521088 | orchestrator |  "osd_lvm_uuid": "a733058e-5b74-5553-b3bf-66d1cbf46d31" 2025-07-12 20:09:32.521097 | orchestrator |  }, 2025-07-12 20:09:32.521105 | orchestrator |  "sdc": { 2025-07-12 20:09:32.521114 | orchestrator |  "osd_lvm_uuid": "8d632655-ba67-5245-89a0-0cb971b00289" 2025-07-12 20:09:32.521122 | orchestrator |  } 2025-07-12 20:09:32.521131 | orchestrator |  } 2025-07-12 20:09:32.521139 | orchestrator | } 2025-07-12 20:09:32.521148 | orchestrator | 2025-07-12 20:09:32.521156 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-07-12 20:09:32.521164 | orchestrator | Saturday 12 July 2025 20:09:29 +0000 (0:00:00.170) 0:00:11.981 ********* 2025-07-12 20:09:32.521173 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:32.521181 | orchestrator | 2025-07-12 20:09:32.521190 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-07-12 20:09:32.521198 | orchestrator | Saturday 12 July 2025 20:09:29 +0000 (0:00:00.118) 0:00:12.100 ********* 2025-07-12 20:09:32.521206 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:32.521215 | orchestrator | 2025-07-12 20:09:32.521223 | orchestrator | TASK [Print shared DB/WAL devices] 
********************************************* 2025-07-12 20:09:32.521236 | orchestrator | Saturday 12 July 2025 20:09:29 +0000 (0:00:00.123) 0:00:12.223 ********* 2025-07-12 20:09:32.521245 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:09:32.521254 | orchestrator | 2025-07-12 20:09:32.521262 | orchestrator | TASK [Print configuration data] ************************************************ 2025-07-12 20:09:32.521271 | orchestrator | Saturday 12 July 2025 20:09:29 +0000 (0:00:00.129) 0:00:12.353 ********* 2025-07-12 20:09:32.521279 | orchestrator | changed: [testbed-node-3] => { 2025-07-12 20:09:32.521287 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-07-12 20:09:32.521296 | orchestrator |  "ceph_osd_devices": { 2025-07-12 20:09:32.521304 | orchestrator |  "sdb": { 2025-07-12 20:09:32.521313 | orchestrator |  "osd_lvm_uuid": "a733058e-5b74-5553-b3bf-66d1cbf46d31" 2025-07-12 20:09:32.521321 | orchestrator |  }, 2025-07-12 20:09:32.521330 | orchestrator |  "sdc": { 2025-07-12 20:09:32.521338 | orchestrator |  "osd_lvm_uuid": "8d632655-ba67-5245-89a0-0cb971b00289" 2025-07-12 20:09:32.521346 | orchestrator |  } 2025-07-12 20:09:32.521355 | orchestrator |  }, 2025-07-12 20:09:32.521363 | orchestrator |  "lvm_volumes": [ 2025-07-12 20:09:32.521372 | orchestrator |  { 2025-07-12 20:09:32.521380 | orchestrator |  "data": "osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31", 2025-07-12 20:09:32.521388 | orchestrator |  "data_vg": "ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31" 2025-07-12 20:09:32.521397 | orchestrator |  }, 2025-07-12 20:09:32.521405 | orchestrator |  { 2025-07-12 20:09:32.521414 | orchestrator |  "data": "osd-block-8d632655-ba67-5245-89a0-0cb971b00289", 2025-07-12 20:09:32.521422 | orchestrator |  "data_vg": "ceph-8d632655-ba67-5245-89a0-0cb971b00289" 2025-07-12 20:09:32.521430 | orchestrator |  } 2025-07-12 20:09:32.521439 | orchestrator |  ] 2025-07-12 20:09:32.521447 | orchestrator |  } 2025-07-12 20:09:32.521456 | orchestrator | } 
2025-07-12 20:09:32.521464 | orchestrator | 2025-07-12 20:09:32.521472 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-07-12 20:09:32.521481 | orchestrator | Saturday 12 July 2025 20:09:29 +0000 (0:00:00.219) 0:00:12.572 ********* 2025-07-12 20:09:32.521489 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 20:09:32.521498 | orchestrator | 2025-07-12 20:09:32.521506 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-07-12 20:09:32.521520 | orchestrator | 2025-07-12 20:09:32.521529 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-12 20:09:32.521537 | orchestrator | Saturday 12 July 2025 20:09:31 +0000 (0:00:02.012) 0:00:14.585 ********* 2025-07-12 20:09:32.521546 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-07-12 20:09:32.521554 | orchestrator | 2025-07-12 20:09:32.521563 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-12 20:09:32.521571 | orchestrator | Saturday 12 July 2025 20:09:31 +0000 (0:00:00.231) 0:00:14.817 ********* 2025-07-12 20:09:32.521580 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:09:32.521588 | orchestrator | 2025-07-12 20:09:32.521597 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:32.521605 | orchestrator | Saturday 12 July 2025 20:09:32 +0000 (0:00:00.235) 0:00:15.052 ********* 2025-07-12 20:09:32.521613 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-07-12 20:09:32.521627 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-07-12 20:09:40.274261 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-07-12 20:09:40.274366 | orchestrator 
| included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-07-12 20:09:40.274379 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-07-12 20:09:40.274390 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-07-12 20:09:40.274401 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-07-12 20:09:40.274411 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-07-12 20:09:40.274422 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-07-12 20:09:40.274433 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-07-12 20:09:40.274443 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-07-12 20:09:40.274454 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-07-12 20:09:40.274482 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-07-12 20:09:40.274494 | orchestrator | 2025-07-12 20:09:40.274505 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:40.274517 | orchestrator | Saturday 12 July 2025 20:09:32 +0000 (0:00:00.392) 0:00:15.445 ********* 2025-07-12 20:09:40.274528 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:40.274545 | orchestrator | 2025-07-12 20:09:40.274556 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:40.274568 | orchestrator | Saturday 12 July 2025 20:09:32 +0000 (0:00:00.211) 0:00:15.656 ********* 2025-07-12 20:09:40.274579 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:40.274589 | orchestrator | 2025-07-12 20:09:40.274600 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:40.274611 | orchestrator | Saturday 12 July 2025 20:09:32 +0000 (0:00:00.199) 0:00:15.856 ********* 2025-07-12 20:09:40.274621 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:40.274632 | orchestrator | 2025-07-12 20:09:40.274642 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:40.274653 | orchestrator | Saturday 12 July 2025 20:09:33 +0000 (0:00:00.202) 0:00:16.058 ********* 2025-07-12 20:09:40.274664 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:40.274674 | orchestrator | 2025-07-12 20:09:40.274685 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:40.274695 | orchestrator | Saturday 12 July 2025 20:09:33 +0000 (0:00:00.198) 0:00:16.257 ********* 2025-07-12 20:09:40.274729 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:40.274741 | orchestrator | 2025-07-12 20:09:40.274751 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:40.274762 | orchestrator | Saturday 12 July 2025 20:09:33 +0000 (0:00:00.195) 0:00:16.453 ********* 2025-07-12 20:09:40.274773 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:40.274783 | orchestrator | 2025-07-12 20:09:40.274794 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:40.274807 | orchestrator | Saturday 12 July 2025 20:09:34 +0000 (0:00:00.578) 0:00:17.032 ********* 2025-07-12 20:09:40.274820 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:40.274832 | orchestrator | 2025-07-12 20:09:40.274845 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:40.274857 | orchestrator | Saturday 12 July 2025 20:09:34 +0000 (0:00:00.230) 0:00:17.262 ********* 
2025-07-12 20:09:40.274869 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:40.274881 | orchestrator | 2025-07-12 20:09:40.274893 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:40.274906 | orchestrator | Saturday 12 July 2025 20:09:34 +0000 (0:00:00.217) 0:00:17.480 ********* 2025-07-12 20:09:40.274919 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_956b92a8-e2a8-4c28-b21e-590538c1fc3c) 2025-07-12 20:09:40.274932 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_956b92a8-e2a8-4c28-b21e-590538c1fc3c) 2025-07-12 20:09:40.274944 | orchestrator | 2025-07-12 20:09:40.274957 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:40.274969 | orchestrator | Saturday 12 July 2025 20:09:35 +0000 (0:00:00.482) 0:00:17.963 ********* 2025-07-12 20:09:40.274982 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cbc49688-9ad7-4fd0-a52c-a19b0583b25c) 2025-07-12 20:09:40.274994 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cbc49688-9ad7-4fd0-a52c-a19b0583b25c) 2025-07-12 20:09:40.275007 | orchestrator | 2025-07-12 20:09:40.275019 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:40.275069 | orchestrator | Saturday 12 July 2025 20:09:35 +0000 (0:00:00.435) 0:00:18.398 ********* 2025-07-12 20:09:40.275083 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1d5b9d5f-7727-4753-bdb1-c3a309291ad5) 2025-07-12 20:09:40.275095 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1d5b9d5f-7727-4753-bdb1-c3a309291ad5) 2025-07-12 20:09:40.275108 | orchestrator | 2025-07-12 20:09:40.275120 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:40.275132 | orchestrator | Saturday 12 July 2025 20:09:35 
+0000 (0:00:00.426) 0:00:18.825 ********* 2025-07-12 20:09:40.275145 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_736d04ae-95cc-4835-aff1-6fbe44d77808) 2025-07-12 20:09:40.275174 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_736d04ae-95cc-4835-aff1-6fbe44d77808) 2025-07-12 20:09:40.275188 | orchestrator | 2025-07-12 20:09:40.275201 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:40.275212 | orchestrator | Saturday 12 July 2025 20:09:36 +0000 (0:00:00.436) 0:00:19.262 ********* 2025-07-12 20:09:40.275223 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-12 20:09:40.275234 | orchestrator | 2025-07-12 20:09:40.275245 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:40.275256 | orchestrator | Saturday 12 July 2025 20:09:36 +0000 (0:00:00.335) 0:00:19.598 ********* 2025-07-12 20:09:40.275267 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-07-12 20:09:40.275284 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-07-12 20:09:40.275295 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-07-12 20:09:40.275305 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-07-12 20:09:40.275324 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-07-12 20:09:40.275334 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-07-12 20:09:40.275345 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-07-12 20:09:40.275355 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-07-12 20:09:40.275365 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-07-12 20:09:40.275376 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-07-12 20:09:40.275401 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-07-12 20:09:40.275412 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-07-12 20:09:40.275434 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-07-12 20:09:40.275445 | orchestrator | 2025-07-12 20:09:40.275455 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:40.275466 | orchestrator | Saturday 12 July 2025 20:09:37 +0000 (0:00:00.377) 0:00:19.975 ********* 2025-07-12 20:09:40.275477 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:40.275487 | orchestrator | 2025-07-12 20:09:40.275498 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:40.275509 | orchestrator | Saturday 12 July 2025 20:09:37 +0000 (0:00:00.211) 0:00:20.187 ********* 2025-07-12 20:09:40.275519 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:40.275530 | orchestrator | 2025-07-12 20:09:40.275540 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:40.275551 | orchestrator | Saturday 12 July 2025 20:09:37 +0000 (0:00:00.198) 0:00:20.386 ********* 2025-07-12 20:09:40.275562 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:40.275572 | orchestrator | 2025-07-12 20:09:40.275583 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:40.275594 | 
orchestrator | Saturday 12 July 2025 20:09:38 +0000 (0:00:00.737) 0:00:21.123 ********* 2025-07-12 20:09:40.275605 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:40.275615 | orchestrator | 2025-07-12 20:09:40.275626 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:40.275637 | orchestrator | Saturday 12 July 2025 20:09:38 +0000 (0:00:00.204) 0:00:21.327 ********* 2025-07-12 20:09:40.275648 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:40.275658 | orchestrator | 2025-07-12 20:09:40.275669 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:40.275680 | orchestrator | Saturday 12 July 2025 20:09:38 +0000 (0:00:00.268) 0:00:21.596 ********* 2025-07-12 20:09:40.275690 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:40.275701 | orchestrator | 2025-07-12 20:09:40.275712 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:40.275722 | orchestrator | Saturday 12 July 2025 20:09:38 +0000 (0:00:00.224) 0:00:21.821 ********* 2025-07-12 20:09:40.275733 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:40.275743 | orchestrator | 2025-07-12 20:09:40.275754 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:40.275765 | orchestrator | Saturday 12 July 2025 20:09:39 +0000 (0:00:00.195) 0:00:22.016 ********* 2025-07-12 20:09:40.275775 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:40.275786 | orchestrator | 2025-07-12 20:09:40.275797 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:40.275807 | orchestrator | Saturday 12 July 2025 20:09:39 +0000 (0:00:00.231) 0:00:22.248 ********* 2025-07-12 20:09:40.275818 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-07-12 20:09:40.275828 | orchestrator | 
2025-07-12 20:09:40.275846 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:40.275857 | orchestrator | Saturday 12 July 2025 20:09:39 +0000 (0:00:00.311) 0:00:22.560 ********* 2025-07-12 20:09:40.275868 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:40.275878 | orchestrator | 2025-07-12 20:09:40.275889 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:40.275899 | orchestrator | Saturday 12 July 2025 20:09:39 +0000 (0:00:00.210) 0:00:22.771 ********* 2025-07-12 20:09:40.275910 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:40.275921 | orchestrator | 2025-07-12 20:09:40.275931 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:40.275942 | orchestrator | Saturday 12 July 2025 20:09:40 +0000 (0:00:00.220) 0:00:22.991 ********* 2025-07-12 20:09:40.275953 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:40.275963 | orchestrator | 2025-07-12 20:09:40.275980 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:46.099735 | orchestrator | Saturday 12 July 2025 20:09:40 +0000 (0:00:00.202) 0:00:23.194 ********* 2025-07-12 20:09:46.099880 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:46.099905 | orchestrator | 2025-07-12 20:09:46.099921 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-07-12 20:09:46.099933 | orchestrator | Saturday 12 July 2025 20:09:40 +0000 (0:00:00.199) 0:00:23.393 ********* 2025-07-12 20:09:46.099950 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-07-12 20:09:46.099966 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-07-12 20:09:46.099994 | orchestrator | 2025-07-12 20:09:46.100074 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2025-07-12 20:09:46.100094 | orchestrator | Saturday 12 July 2025 20:09:40 +0000 (0:00:00.166) 0:00:23.560 ********* 2025-07-12 20:09:46.100110 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:46.100121 | orchestrator | 2025-07-12 20:09:46.100136 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-07-12 20:09:46.100155 | orchestrator | Saturday 12 July 2025 20:09:40 +0000 (0:00:00.358) 0:00:23.918 ********* 2025-07-12 20:09:46.100183 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:46.100197 | orchestrator | 2025-07-12 20:09:46.100227 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-07-12 20:09:46.100241 | orchestrator | Saturday 12 July 2025 20:09:41 +0000 (0:00:00.137) 0:00:24.056 ********* 2025-07-12 20:09:46.100256 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:46.100275 | orchestrator | 2025-07-12 20:09:46.100292 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-07-12 20:09:46.100305 | orchestrator | Saturday 12 July 2025 20:09:41 +0000 (0:00:00.142) 0:00:24.198 ********* 2025-07-12 20:09:46.100344 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:09:46.100358 | orchestrator | 2025-07-12 20:09:46.100377 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-07-12 20:09:46.100399 | orchestrator | Saturday 12 July 2025 20:09:41 +0000 (0:00:00.142) 0:00:24.341 ********* 2025-07-12 20:09:46.100413 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c2ea885c-c09d-528a-8e30-9d64ecae89b3'}}) 2025-07-12 20:09:46.100429 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5037a2b3-768c-53ee-9f72-df4915d4fb6f'}}) 2025-07-12 20:09:46.100448 | orchestrator | 2025-07-12 20:09:46.100465 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2025-07-12 20:09:46.100477 | orchestrator | Saturday 12 July 2025 20:09:41 +0000 (0:00:00.159) 0:00:24.500 ********* 2025-07-12 20:09:46.100489 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c2ea885c-c09d-528a-8e30-9d64ecae89b3'}})  2025-07-12 20:09:46.100503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5037a2b3-768c-53ee-9f72-df4915d4fb6f'}})  2025-07-12 20:09:46.100543 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:46.100557 | orchestrator | 2025-07-12 20:09:46.100569 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-07-12 20:09:46.100585 | orchestrator | Saturday 12 July 2025 20:09:41 +0000 (0:00:00.147) 0:00:24.648 ********* 2025-07-12 20:09:46.100600 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c2ea885c-c09d-528a-8e30-9d64ecae89b3'}})  2025-07-12 20:09:46.100612 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5037a2b3-768c-53ee-9f72-df4915d4fb6f'}})  2025-07-12 20:09:46.100626 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:46.100643 | orchestrator | 2025-07-12 20:09:46.100655 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-07-12 20:09:46.100668 | orchestrator | Saturday 12 July 2025 20:09:41 +0000 (0:00:00.157) 0:00:24.806 ********* 2025-07-12 20:09:46.100682 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c2ea885c-c09d-528a-8e30-9d64ecae89b3'}})  2025-07-12 20:09:46.100695 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5037a2b3-768c-53ee-9f72-df4915d4fb6f'}})  2025-07-12 20:09:46.100707 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:46.100721 | 
orchestrator | 2025-07-12 20:09:46.100736 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-07-12 20:09:46.100750 | orchestrator | Saturday 12 July 2025 20:09:42 +0000 (0:00:00.144) 0:00:24.951 ********* 2025-07-12 20:09:46.100761 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:09:46.100778 | orchestrator | 2025-07-12 20:09:46.100793 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-07-12 20:09:46.100806 | orchestrator | Saturday 12 July 2025 20:09:42 +0000 (0:00:00.142) 0:00:25.093 ********* 2025-07-12 20:09:46.100819 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:09:46.100834 | orchestrator | 2025-07-12 20:09:46.100848 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-07-12 20:09:46.100860 | orchestrator | Saturday 12 July 2025 20:09:42 +0000 (0:00:00.147) 0:00:25.241 ********* 2025-07-12 20:09:46.100874 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:46.100888 | orchestrator | 2025-07-12 20:09:46.100902 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-07-12 20:09:46.100914 | orchestrator | Saturday 12 July 2025 20:09:42 +0000 (0:00:00.127) 0:00:25.369 ********* 2025-07-12 20:09:46.100929 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:46.100944 | orchestrator | 2025-07-12 20:09:46.100959 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-07-12 20:09:46.100971 | orchestrator | Saturday 12 July 2025 20:09:42 +0000 (0:00:00.127) 0:00:25.496 ********* 2025-07-12 20:09:46.100984 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:46.101001 | orchestrator | 2025-07-12 20:09:46.101067 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-07-12 20:09:46.101081 | orchestrator | Saturday 12 July 2025 20:09:42 +0000 
(0:00:00.334) 0:00:25.831 ********* 2025-07-12 20:09:46.101098 | orchestrator | ok: [testbed-node-4] => { 2025-07-12 20:09:46.101113 | orchestrator |  "ceph_osd_devices": { 2025-07-12 20:09:46.101129 | orchestrator |  "sdb": { 2025-07-12 20:09:46.101141 | orchestrator |  "osd_lvm_uuid": "c2ea885c-c09d-528a-8e30-9d64ecae89b3" 2025-07-12 20:09:46.101155 | orchestrator |  }, 2025-07-12 20:09:46.101169 | orchestrator |  "sdc": { 2025-07-12 20:09:46.101184 | orchestrator |  "osd_lvm_uuid": "5037a2b3-768c-53ee-9f72-df4915d4fb6f" 2025-07-12 20:09:46.101196 | orchestrator |  } 2025-07-12 20:09:46.101210 | orchestrator |  } 2025-07-12 20:09:46.101224 | orchestrator | } 2025-07-12 20:09:46.101240 | orchestrator | 2025-07-12 20:09:46.101252 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-07-12 20:09:46.101266 | orchestrator | Saturday 12 July 2025 20:09:43 +0000 (0:00:00.181) 0:00:26.012 ********* 2025-07-12 20:09:46.101281 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:46.101310 | orchestrator | 2025-07-12 20:09:46.101325 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-07-12 20:09:46.101340 | orchestrator | Saturday 12 July 2025 20:09:43 +0000 (0:00:00.120) 0:00:26.132 ********* 2025-07-12 20:09:46.101385 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:46.101397 | orchestrator | 2025-07-12 20:09:46.101409 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-07-12 20:09:46.101430 | orchestrator | Saturday 12 July 2025 20:09:43 +0000 (0:00:00.129) 0:00:26.261 ********* 2025-07-12 20:09:46.101442 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:09:46.101453 | orchestrator | 2025-07-12 20:09:46.101467 | orchestrator | TASK [Print configuration data] ************************************************ 2025-07-12 20:09:46.101482 | orchestrator | Saturday 12 July 2025 20:09:43 +0000 
(0:00:00.128) 0:00:26.390 ********* 2025-07-12 20:09:46.101494 | orchestrator | changed: [testbed-node-4] => { 2025-07-12 20:09:46.101508 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-07-12 20:09:46.101523 | orchestrator |  "ceph_osd_devices": { 2025-07-12 20:09:46.101535 | orchestrator |  "sdb": { 2025-07-12 20:09:46.101549 | orchestrator |  "osd_lvm_uuid": "c2ea885c-c09d-528a-8e30-9d64ecae89b3" 2025-07-12 20:09:46.101564 | orchestrator |  }, 2025-07-12 20:09:46.101577 | orchestrator |  "sdc": { 2025-07-12 20:09:46.101590 | orchestrator |  "osd_lvm_uuid": "5037a2b3-768c-53ee-9f72-df4915d4fb6f" 2025-07-12 20:09:46.101605 | orchestrator |  } 2025-07-12 20:09:46.101618 | orchestrator |  }, 2025-07-12 20:09:46.101629 | orchestrator |  "lvm_volumes": [ 2025-07-12 20:09:46.101641 | orchestrator |  { 2025-07-12 20:09:46.101654 | orchestrator |  "data": "osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3", 2025-07-12 20:09:46.101672 | orchestrator |  "data_vg": "ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3" 2025-07-12 20:09:46.101686 | orchestrator |  }, 2025-07-12 20:09:46.101699 | orchestrator |  { 2025-07-12 20:09:46.101710 | orchestrator |  "data": "osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f", 2025-07-12 20:09:46.101722 | orchestrator |  "data_vg": "ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f" 2025-07-12 20:09:46.101758 | orchestrator |  } 2025-07-12 20:09:46.101770 | orchestrator |  ] 2025-07-12 20:09:46.101783 | orchestrator |  } 2025-07-12 20:09:46.101798 | orchestrator | } 2025-07-12 20:09:46.101810 | orchestrator | 2025-07-12 20:09:46.101820 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-07-12 20:09:46.101834 | orchestrator | Saturday 12 July 2025 20:09:43 +0000 (0:00:00.198) 0:00:26.589 ********* 2025-07-12 20:09:46.101848 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-07-12 20:09:46.101862 | orchestrator | 2025-07-12 20:09:46.101872 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2025-07-12 20:09:46.101886 | orchestrator | 2025-07-12 20:09:46.101900 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-12 20:09:46.101913 | orchestrator | Saturday 12 July 2025 20:09:44 +0000 (0:00:00.983) 0:00:27.573 ********* 2025-07-12 20:09:46.101924 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-07-12 20:09:46.101940 | orchestrator | 2025-07-12 20:09:46.101955 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-12 20:09:46.101969 | orchestrator | Saturday 12 July 2025 20:09:44 +0000 (0:00:00.236) 0:00:27.809 ********* 2025-07-12 20:09:46.101980 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:09:46.101994 | orchestrator | 2025-07-12 20:09:46.102010 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:46.102118 | orchestrator | Saturday 12 July 2025 20:09:45 +0000 (0:00:00.244) 0:00:28.054 ********* 2025-07-12 20:09:46.102135 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-07-12 20:09:46.102147 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-07-12 20:09:46.102178 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-07-12 20:09:46.102194 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-07-12 20:09:46.102222 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-07-12 20:09:46.102237 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-07-12 20:09:46.102251 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-07-12 20:09:46.102263 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-07-12 20:09:46.102273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-07-12 20:09:46.102284 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-07-12 20:09:46.102294 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-07-12 20:09:46.102322 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-07-12 20:09:54.676432 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-07-12 20:09:54.676618 | orchestrator | 2025-07-12 20:09:54.676635 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:54.676647 | orchestrator | Saturday 12 July 2025 20:09:46 +0000 (0:00:00.956) 0:00:29.010 ********* 2025-07-12 20:09:54.676658 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.676669 | orchestrator | 2025-07-12 20:09:54.676680 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:54.676691 | orchestrator | Saturday 12 July 2025 20:09:46 +0000 (0:00:00.228) 0:00:29.239 ********* 2025-07-12 20:09:54.676702 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.676712 | orchestrator | 2025-07-12 20:09:54.676723 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:54.676733 | orchestrator | Saturday 12 July 2025 20:09:46 +0000 (0:00:00.279) 0:00:29.518 ********* 2025-07-12 20:09:54.676744 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.676754 | orchestrator | 2025-07-12 20:09:54.676765 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:54.676776 | 
orchestrator | Saturday 12 July 2025 20:09:46 +0000 (0:00:00.223) 0:00:29.742 ********* 2025-07-12 20:09:54.676786 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.676797 | orchestrator | 2025-07-12 20:09:54.676807 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:54.676818 | orchestrator | Saturday 12 July 2025 20:09:47 +0000 (0:00:00.213) 0:00:29.955 ********* 2025-07-12 20:09:54.676828 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.676839 | orchestrator | 2025-07-12 20:09:54.676849 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:54.676860 | orchestrator | Saturday 12 July 2025 20:09:47 +0000 (0:00:00.233) 0:00:30.189 ********* 2025-07-12 20:09:54.676870 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.676881 | orchestrator | 2025-07-12 20:09:54.676891 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:54.676902 | orchestrator | Saturday 12 July 2025 20:09:47 +0000 (0:00:00.221) 0:00:30.410 ********* 2025-07-12 20:09:54.676912 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.676923 | orchestrator | 2025-07-12 20:09:54.676933 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:54.676944 | orchestrator | Saturday 12 July 2025 20:09:47 +0000 (0:00:00.200) 0:00:30.610 ********* 2025-07-12 20:09:54.676955 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.676967 | orchestrator | 2025-07-12 20:09:54.676983 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:54.677003 | orchestrator | Saturday 12 July 2025 20:09:47 +0000 (0:00:00.190) 0:00:30.800 ********* 2025-07-12 20:09:54.677071 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_a9eb58a9-7a8d-4884-8549-7422e45233bf) 2025-07-12 20:09:54.677113 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a9eb58a9-7a8d-4884-8549-7422e45233bf) 2025-07-12 20:09:54.677134 | orchestrator | 2025-07-12 20:09:54.677154 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:54.677173 | orchestrator | Saturday 12 July 2025 20:09:48 +0000 (0:00:00.627) 0:00:31.428 ********* 2025-07-12 20:09:54.677187 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9f08906f-6338-431f-a878-f727643915a4) 2025-07-12 20:09:54.677200 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9f08906f-6338-431f-a878-f727643915a4) 2025-07-12 20:09:54.677213 | orchestrator | 2025-07-12 20:09:54.677225 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:54.677237 | orchestrator | Saturday 12 July 2025 20:09:49 +0000 (0:00:00.958) 0:00:32.386 ********* 2025-07-12 20:09:54.677250 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1628f950-5804-44ef-9d42-f709daecc346) 2025-07-12 20:09:54.677262 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1628f950-5804-44ef-9d42-f709daecc346) 2025-07-12 20:09:54.677274 | orchestrator | 2025-07-12 20:09:54.677286 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:09:54.677299 | orchestrator | Saturday 12 July 2025 20:09:49 +0000 (0:00:00.452) 0:00:32.838 ********* 2025-07-12 20:09:54.677311 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d5652225-c6ef-49dc-a608-4c92c2a71dd6) 2025-07-12 20:09:54.677324 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d5652225-c6ef-49dc-a608-4c92c2a71dd6) 2025-07-12 20:09:54.677334 | orchestrator | 2025-07-12 20:09:54.677345 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2025-07-12 20:09:54.677356 | orchestrator | Saturday 12 July 2025 20:09:50 +0000 (0:00:00.451) 0:00:33.290 ********* 2025-07-12 20:09:54.677366 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-12 20:09:54.677377 | orchestrator | 2025-07-12 20:09:54.677387 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:54.677398 | orchestrator | Saturday 12 July 2025 20:09:50 +0000 (0:00:00.345) 0:00:33.636 ********* 2025-07-12 20:09:54.677415 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-07-12 20:09:54.677433 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-07-12 20:09:54.677452 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-07-12 20:09:54.677470 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-07-12 20:09:54.677488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-07-12 20:09:54.677528 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-07-12 20:09:54.677549 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-07-12 20:09:54.677568 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-07-12 20:09:54.677581 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-07-12 20:09:54.677591 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-07-12 20:09:54.677602 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2025-07-12 20:09:54.677612 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-07-12 20:09:54.677623 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-07-12 20:09:54.677643 | orchestrator | 2025-07-12 20:09:54.677654 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:54.677665 | orchestrator | Saturday 12 July 2025 20:09:51 +0000 (0:00:00.379) 0:00:34.016 ********* 2025-07-12 20:09:54.677676 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.677687 | orchestrator | 2025-07-12 20:09:54.677697 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:54.677708 | orchestrator | Saturday 12 July 2025 20:09:51 +0000 (0:00:00.267) 0:00:34.283 ********* 2025-07-12 20:09:54.677718 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.677729 | orchestrator | 2025-07-12 20:09:54.677739 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:54.677750 | orchestrator | Saturday 12 July 2025 20:09:51 +0000 (0:00:00.187) 0:00:34.470 ********* 2025-07-12 20:09:54.677761 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.677771 | orchestrator | 2025-07-12 20:09:54.677782 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:54.677793 | orchestrator | Saturday 12 July 2025 20:09:51 +0000 (0:00:00.206) 0:00:34.677 ********* 2025-07-12 20:09:54.677803 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.677814 | orchestrator | 2025-07-12 20:09:54.677824 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:54.677835 | orchestrator | Saturday 12 July 2025 20:09:51 +0000 (0:00:00.207) 0:00:34.885 ********* 2025-07-12 20:09:54.677845 
| orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.677856 | orchestrator | 2025-07-12 20:09:54.677866 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:54.677877 | orchestrator | Saturday 12 July 2025 20:09:52 +0000 (0:00:00.270) 0:00:35.155 ********* 2025-07-12 20:09:54.677887 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.677898 | orchestrator | 2025-07-12 20:09:54.677909 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:54.677919 | orchestrator | Saturday 12 July 2025 20:09:52 +0000 (0:00:00.235) 0:00:35.390 ********* 2025-07-12 20:09:54.677930 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.677940 | orchestrator | 2025-07-12 20:09:54.677951 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:54.677961 | orchestrator | Saturday 12 July 2025 20:09:52 +0000 (0:00:00.507) 0:00:35.898 ********* 2025-07-12 20:09:54.677972 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.677982 | orchestrator | 2025-07-12 20:09:54.677993 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:54.678004 | orchestrator | Saturday 12 July 2025 20:09:53 +0000 (0:00:00.178) 0:00:36.077 ********* 2025-07-12 20:09:54.678014 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-07-12 20:09:54.678115 | orchestrator | 2025-07-12 20:09:54.678127 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:54.678137 | orchestrator | Saturday 12 July 2025 20:09:53 +0000 (0:00:00.345) 0:00:36.422 ********* 2025-07-12 20:09:54.678148 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.678158 | orchestrator | 2025-07-12 20:09:54.678169 | orchestrator | TASK [Add known partitions to the list of available block devices] 
************* 2025-07-12 20:09:54.678179 | orchestrator | Saturday 12 July 2025 20:09:53 +0000 (0:00:00.217) 0:00:36.640 ********* 2025-07-12 20:09:54.678190 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.678200 | orchestrator | 2025-07-12 20:09:54.678211 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:54.678221 | orchestrator | Saturday 12 July 2025 20:09:53 +0000 (0:00:00.191) 0:00:36.832 ********* 2025-07-12 20:09:54.678232 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.678242 | orchestrator | 2025-07-12 20:09:54.678253 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:09:54.678264 | orchestrator | Saturday 12 July 2025 20:09:54 +0000 (0:00:00.198) 0:00:37.030 ********* 2025-07-12 20:09:54.678285 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.678296 | orchestrator | 2025-07-12 20:09:54.678307 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-07-12 20:09:54.678317 | orchestrator | Saturday 12 July 2025 20:09:54 +0000 (0:00:00.182) 0:00:37.213 ********* 2025-07-12 20:09:54.678328 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2025-07-12 20:09:54.678338 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-07-12 20:09:54.678349 | orchestrator | 2025-07-12 20:09:54.678359 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-07-12 20:09:54.678377 | orchestrator | Saturday 12 July 2025 20:09:54 +0000 (0:00:00.135) 0:00:37.348 ********* 2025-07-12 20:09:54.678388 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.678398 | orchestrator | 2025-07-12 20:09:54.678409 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-07-12 20:09:54.678419 | orchestrator | Saturday 12 July 
2025 20:09:54 +0000 (0:00:00.120) 0:00:37.468 ********* 2025-07-12 20:09:54.678430 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:54.678440 | orchestrator | 2025-07-12 20:09:54.678460 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-07-12 20:09:58.198257 | orchestrator | Saturday 12 July 2025 20:09:54 +0000 (0:00:00.132) 0:00:37.600 ********* 2025-07-12 20:09:58.198353 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:58.198369 | orchestrator | 2025-07-12 20:09:58.198382 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-07-12 20:09:58.198394 | orchestrator | Saturday 12 July 2025 20:09:54 +0000 (0:00:00.113) 0:00:37.714 ********* 2025-07-12 20:09:58.198405 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:09:58.198416 | orchestrator | 2025-07-12 20:09:58.198427 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-07-12 20:09:58.198438 | orchestrator | Saturday 12 July 2025 20:09:54 +0000 (0:00:00.135) 0:00:37.849 ********* 2025-07-12 20:09:58.198449 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3d06229f-4e10-52c4-b396-8cb508609dff'}}) 2025-07-12 20:09:58.198461 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '81820e8a-af8a-5909-b466-981a4bed2414'}}) 2025-07-12 20:09:58.198471 | orchestrator | 2025-07-12 20:09:58.198482 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-07-12 20:09:58.198493 | orchestrator | Saturday 12 July 2025 20:09:55 +0000 (0:00:00.300) 0:00:38.149 ********* 2025-07-12 20:09:58.198505 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3d06229f-4e10-52c4-b396-8cb508609dff'}})  2025-07-12 20:09:58.198517 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': 
{'osd_lvm_uuid': '81820e8a-af8a-5909-b466-981a4bed2414'}})  2025-07-12 20:09:58.198528 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:58.198539 | orchestrator | 2025-07-12 20:09:58.198550 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-07-12 20:09:58.198562 | orchestrator | Saturday 12 July 2025 20:09:55 +0000 (0:00:00.145) 0:00:38.295 ********* 2025-07-12 20:09:58.198573 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3d06229f-4e10-52c4-b396-8cb508609dff'}})  2025-07-12 20:09:58.198584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '81820e8a-af8a-5909-b466-981a4bed2414'}})  2025-07-12 20:09:58.198595 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:58.198606 | orchestrator | 2025-07-12 20:09:58.198636 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-07-12 20:09:58.198657 | orchestrator | Saturday 12 July 2025 20:09:55 +0000 (0:00:00.134) 0:00:38.430 ********* 2025-07-12 20:09:58.198677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3d06229f-4e10-52c4-b396-8cb508609dff'}})  2025-07-12 20:09:58.198697 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '81820e8a-af8a-5909-b466-981a4bed2414'}})  2025-07-12 20:09:58.198741 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:58.198752 | orchestrator | 2025-07-12 20:09:58.198763 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-07-12 20:09:58.198773 | orchestrator | Saturday 12 July 2025 20:09:55 +0000 (0:00:00.123) 0:00:38.554 ********* 2025-07-12 20:09:58.198784 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:09:58.198795 | orchestrator | 2025-07-12 20:09:58.198806 | orchestrator | TASK [Set OSD devices config data] 
********************************************* 2025-07-12 20:09:58.198818 | orchestrator | Saturday 12 July 2025 20:09:55 +0000 (0:00:00.113) 0:00:38.667 ********* 2025-07-12 20:09:58.198831 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:09:58.198843 | orchestrator | 2025-07-12 20:09:58.198855 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-07-12 20:09:58.198868 | orchestrator | Saturday 12 July 2025 20:09:55 +0000 (0:00:00.152) 0:00:38.820 ********* 2025-07-12 20:09:58.198880 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:58.198892 | orchestrator | 2025-07-12 20:09:58.198905 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-07-12 20:09:58.198918 | orchestrator | Saturday 12 July 2025 20:09:56 +0000 (0:00:00.120) 0:00:38.941 ********* 2025-07-12 20:09:58.198930 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:58.198942 | orchestrator | 2025-07-12 20:09:58.198954 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-07-12 20:09:58.198967 | orchestrator | Saturday 12 July 2025 20:09:56 +0000 (0:00:00.128) 0:00:39.069 ********* 2025-07-12 20:09:58.198979 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:09:58.198991 | orchestrator | 2025-07-12 20:09:58.199004 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-07-12 20:09:58.199016 | orchestrator | Saturday 12 July 2025 20:09:56 +0000 (0:00:00.105) 0:00:39.174 ********* 2025-07-12 20:09:58.199028 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 20:09:58.199041 | orchestrator |  "ceph_osd_devices": { 2025-07-12 20:09:58.199080 | orchestrator |  "sdb": { 2025-07-12 20:09:58.199093 | orchestrator |  "osd_lvm_uuid": "3d06229f-4e10-52c4-b396-8cb508609dff" 2025-07-12 20:09:58.199105 | orchestrator |  }, 2025-07-12 20:09:58.199117 | orchestrator |  "sdc": { 2025-07-12 
 "osd_lvm_uuid": "81820e8a-af8a-5909-b466-981a4bed2414"
            }
        }
    }
}

TASK [Print WAL devices] *******************************************************
Saturday 12 July 2025 20:09:56 +0000 (0:00:00.113) 0:00:39.287 *********
skipping: [testbed-node-5]

TASK [Print DB devices] ********************************************************
Saturday 12 July 2025 20:09:56 +0000 (0:00:00.107) 0:00:39.394 *********
skipping: [testbed-node-5]

TASK [Print shared DB/WAL devices] *********************************************
Saturday 12 July 2025 20:09:56 +0000 (0:00:00.106) 0:00:39.501 *********
skipping: [testbed-node-5]

TASK [Print configuration data] ************************************************
Saturday 12 July 2025 20:09:56 +0000 (0:00:00.277) 0:00:39.778 *********
changed: [testbed-node-5] => {
    "_ceph_configure_lvm_config_data": {
        "ceph_osd_devices": {
            "sdb": {
                "osd_lvm_uuid": "3d06229f-4e10-52c4-b396-8cb508609dff"
            },
            "sdc": {
                "osd_lvm_uuid": "81820e8a-af8a-5909-b466-981a4bed2414"
            }
        },
        "lvm_volumes": [
            {
                "data": "osd-block-3d06229f-4e10-52c4-b396-8cb508609dff",
                "data_vg": "ceph-3d06229f-4e10-52c4-b396-8cb508609dff"
            },
            {
                "data": "osd-block-81820e8a-af8a-5909-b466-981a4bed2414",
                "data_vg": "ceph-81820e8a-af8a-5909-b466-981a4bed2414"
            }
        ]
    }
}

RUNNING HANDLER [Write configuration file] *************************************
Saturday 12 July 2025 20:09:57 +0000 (0:00:00.190) 0:00:39.969 *********
changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]

PLAY RECAP *********************************************************************
testbed-node-3 : ok=42  changed=2  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
testbed-node-4 : ok=42  changed=2  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0
testbed-node-5 : ok=42  changed=2  unreachable=0  failed=0  skipped=32  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Saturday 12 July 2025 20:09:57 +0000 (0:00:00.890) 0:00:40.859 *********
===============================================================================
Write configuration file ------------------------------------------------ 3.89s
Add known links to the list of available block devices ------------------ 1.72s
Add known partitions to the list of available block devices ------------- 1.16s
Add known links to the list of available block devices ------------------ 0.96s
Add known links to the list of available block devices ------------------ 0.75s
Add known partitions to the list of available block devices ------------- 0.74s
Get initial list of available block devices ----------------------------- 0.71s
Get extra vars for Ceph configuration ----------------------------------- 0.71s
Add known links to the list of available block devices ------------------ 0.66s
Add known links to the list of available block devices ------------------ 0.64s
Add known links to the list of available block devices ------------------ 0.63s
Generate lvm_volumes structure (block only) ----------------------------- 0.61s
Print configuration data ------------------------------------------------ 0.61s
Generate WAL VG names --------------------------------------------------- 0.61s
Set DB+WAL devices config data ------------------------------------------ 0.59s
Add known links to the list of available block devices ------------------ 0.58s
Generate lvm_volumes structure (block + db + wal) ----------------------- 0.56s
Add known partitions to the list of available block devices ------------- 0.54s
Add known partitions to the list of available block devices ------------- 0.54s
Print shared DB/WAL devices --------------------------------------------- 0.54s

2025-07-12 20:10:20 | INFO  | Task d798b6b8-2e12-4457-994e-2a8ef9d47553 (sync inventory) is running in background. Output coming soon.
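The "Print configuration data" output above shows the pattern the play relies on: each `ceph_osd_devices` entry carries a deterministic `osd_lvm_uuid`, and the `lvm_volumes` structure is derived from it by prefixing `osd-block-` (the LV name) and `ceph-` (the VG name). A minimal sketch of that derivation, inferred from the log rather than taken from the actual playbook code:

```python
def build_lvm_volumes(ceph_osd_devices: dict) -> list[dict]:
    """Derive the lvm_volumes list from ceph_osd_devices, as seen in the log.

    Illustrative only: the real generation happens inside the OSISM Ansible
    tasks ("Generate lvm_volumes structure"); this mirrors the visible result.
    """
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for _device, spec in sorted(ceph_osd_devices.items())
    ]


if __name__ == "__main__":
    # The two devices printed for testbed-node-5 above.
    devices = {
        "sdb": {"osd_lvm_uuid": "3d06229f-4e10-52c4-b396-8cb508609dff"},
        "sdc": {"osd_lvm_uuid": "81820e8a-af8a-5909-b466-981a4bed2414"},
    }
    for volume in build_lvm_volumes(devices):
        print(volume["data"], volume["data_vg"])
```

Applied to the `sdb`/`sdc` UUIDs from the log, this reproduces exactly the `lvm_volumes` entries printed by the task.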
2025-07-12 20:10:21 | INFO  | Starting group_vars file reorganization
2025-07-12 20:10:21 | INFO  | Moved 0 file(s) to their respective directories
2025-07-12 20:10:21 | INFO  | Group_vars file reorganization completed
2025-07-12 20:10:23 | INFO  | Starting variable preparation from inventory
2025-07-12 20:10:25 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-07-12 20:10:25 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-07-12 20:10:25 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-07-12 20:10:25 | INFO  | 3 file(s) written, 6 host(s) processed
2025-07-12 20:10:25 | INFO  | Variable preparation completed
2025-07-12 20:10:26 | INFO  | Starting inventory overwrite handling
2025-07-12 20:10:26 | INFO  | Handling group overwrites in 99-overwrite
2025-07-12 20:10:26 | INFO  | Removing group frr:children from 60-generic
2025-07-12 20:10:26 | INFO  | Removing group storage:children from 50-kolla
2025-07-12 20:10:26 | INFO  | Removing group netbird:children from 50-infrastruture
2025-07-12 20:10:26 | INFO  | Removing group ceph-rgw from 50-ceph
2025-07-12 20:10:26 | INFO  | Removing group ceph-mds from 50-ceph
2025-07-12 20:10:26 | INFO  | Handling group overwrites in 20-roles
2025-07-12 20:10:26 | INFO  | Removing group k3s_node from 50-infrastruture
2025-07-12 20:10:26 | INFO  | Removed 6 group(s) in total
2025-07-12 20:10:26 | INFO  | Inventory overwrite handling completed
2025-07-12 20:10:27 | INFO  | Starting merge of inventory files
2025-07-12 20:10:27 | INFO  | Inventory files merged successfully
2025-07-12 20:10:31 | INFO  | Generating ClusterShell configuration from Ansible inventory
2025-07-12 20:10:37 | INFO  | Successfully wrote ClusterShell configuration
[master a020019] 2025-07-12-20-10
 1 file changed, 30 insertions(+), 9 deletions(-)
2025-07-12 20:10:40 | INFO  | Task 46eaa8a2-6eae-4bb9-9d19-d65bb0c76155 (ceph-create-lvm-devices) was prepared for execution.
2025-07-12 20:10:40 | INFO  | It takes a moment until task 46eaa8a2-6eae-4bb9-9d19-d65bb0c76155 (ceph-create-lvm-devices) has been started and output is visible here.
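The "Generating ClusterShell configuration from Ansible inventory" step above maps inventory groups onto a ClusterShell group source so `clush -g <group>` can target the same hosts as Ansible. A minimal sketch of that mapping, using the flat `group: nodeset` file format of ClusterShell's `groups.d` sources; the group names and hosts below are hypothetical, and OSISM's real implementation is not reproduced here:

```python
def to_clustershell_groups(inventory_groups: dict[str, list[str]]) -> str:
    """Render Ansible-style groups as a ClusterShell flat group file.

    Each non-empty group becomes one line of the form "group: host1,host2".
    """
    lines = []
    for group, hosts in sorted(inventory_groups.items()):
        if hosts:  # ClusterShell has no use for empty groups
            lines.append(f"{group}: {','.join(sorted(hosts))}")
    return "\n".join(lines) + "\n"


if __name__ == "__main__":
    # Hypothetical testbed groups for illustration.
    groups = {
        "ceph-osd": ["testbed-node-3", "testbed-node-4", "testbed-node-5"],
        "manager": ["testbed-manager"],
    }
    print(to_clustershell_groups(groups), end="")
```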
PLAY [Ceph create LVM devices] *************************************************

TASK [Get extra vars for Ceph configuration] ***********************************
Saturday 12 July 2025 20:10:44 +0000 (0:00:00.284) 0:00:00.284 *********
ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]

TASK [Get initial list of available block devices] *****************************
Saturday 12 July 2025 20:10:45 +0000 (0:00:00.213) 0:00:00.497 *********
ok: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 20:10:45 +0000 (0:00:00.198) 0:00:00.696 *********
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 20:10:45 +0000 (0:00:00.368) 0:00:01.065 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 20:10:45 +0000 (0:00:00.384) 0:00:01.449 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 20:10:46 +0000 (0:00:00.185) 0:00:01.634 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 20:10:46 +0000 (0:00:00.171) 0:00:01.806 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 20:10:46 +0000 (0:00:00.200) 0:00:02.007 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 20:10:46 +0000 (0:00:00.179) 0:00:02.186 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 20:10:46 +0000 (0:00:00.161) 0:00:02.347 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 20:10:47 +0000 (0:00:00.162) 0:00:02.510 *********
skipping: [testbed-node-3]

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 20:10:47 +0000 (0:00:00.185) 0:00:02.695 *********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5410106d-ed3b-4664-9779-6ad1cc9646b0)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5410106d-ed3b-4664-9779-6ad1cc9646b0)

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 20:10:47 +0000 (0:00:00.400) 0:00:03.095 *********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_47b67cf6-6134-4ebc-b4bd-75f5912c51d1)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_47b67cf6-6134-4ebc-b4bd-75f5912c51d1)

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 20:10:47 +0000 (0:00:00.345) 0:00:03.441 *********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e02eada2-9691-4994-b44c-0b327a73be9a)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e02eada2-9691-4994-b44c-0b327a73be9a)

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 20:10:48 +0000 (0:00:00.495) 0:00:03.937 *********
ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_fe3c3c4e-2b96-4bec-8093-d77b3db985a2)
ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_fe3c3c4e-2b96-4bec-8093-d77b3db985a2)

TASK [Add known links to the list of available block devices] ******************
Saturday 12 July 2025 20:10:49 +0000 (0:00:00.572) 0:00:04.510 *********
ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 20:10:49 +0000 (0:00:00.641) 0:00:05.151 *********
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 20:10:50 +0000 (0:00:00.456) 0:00:05.608 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 20:10:50 +0000 (0:00:00.201) 0:00:05.810 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 20:10:50 +0000 (0:00:00.176) 0:00:05.986 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 20:10:50 +0000 (0:00:00.214) 0:00:06.201 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 20:10:50 +0000 (0:00:00.233) 0:00:06.435 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 20:10:51 +0000 (0:00:00.233) 0:00:06.668 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 20:10:51 +0000 (0:00:00.176) 0:00:06.845 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 20:10:51 +0000 (0:00:00.175) 0:00:07.020 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 20:10:51 +0000 (0:00:00.182) 0:00:07.203 *********
ok: [testbed-node-3] => (item=sda1)

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 20:10:52 +0000 (0:00:00.432) 0:00:07.635 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 20:10:52 +0000 (0:00:00.464) 0:00:08.100 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 20:10:52 +0000 (0:00:00.188) 0:00:08.288 *********
skipping: [testbed-node-3]

TASK [Add known partitions to the list of available block devices] *************
Saturday 12 July 2025 20:10:53 +0000 (0:00:00.212) 0:00:08.500 *********
skipping: [testbed-node-3]

TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
Saturday 12 July 2025 20:10:53 +0000 (0:00:00.215) 0:00:08.716 *********
skipping: [testbed-node-3]

TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
Saturday 12 July 2025 20:10:53 +0000 (0:00:00.123) 0:00:08.839 *********
ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a733058e-5b74-5553-b3bf-66d1cbf46d31'}})
ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '8d632655-ba67-5245-89a0-0cb971b00289'}})

TASK [Create block VGs] ********************************************************
Saturday 12 July 2025 20:10:53 +0000 (0:00:00.176) 0:00:09.015 *********
changed: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})
changed: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})

TASK [Print 'Create block VGs'] ************************************************
Saturday 12 July 2025 20:10:55 +0000 (0:00:01.850) 0:00:10.866 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})
skipping: [testbed-node-3]

TASK [Create block LVs] ********************************************************
Saturday 12 July 2025 20:10:55 +0000 (0:00:00.163) 0:00:11.030 *********
changed: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})
changed: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})

TASK [Print 'Create block LVs'] ************************************************
Saturday 12 July 2025 20:10:57 +0000 (0:00:01.452) 0:00:12.482 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})
skipping: [testbed-node-3]

TASK [Create DB VGs] ***********************************************************
Saturday 12 July 2025 20:10:57 +0000 (0:00:00.152) 0:00:12.634 *********
skipping: [testbed-node-3]

TASK [Print 'Create DB VGs'] ***************************************************
Saturday 12 July 2025 20:10:57 +0000 (0:00:00.128) 0:00:12.763 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})
skipping: [testbed-node-3]

TASK [Create WAL VGs] **********************************************************
Saturday 12 July 2025 20:10:57 +0000 (0:00:00.144) 0:00:12.908 *********
skipping: [testbed-node-3]

TASK [Print 'Create WAL VGs'] **************************************************
Saturday 12 July 2025 20:10:57 +0000 (0:00:00.277) 0:00:13.185 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})
skipping: [testbed-node-3]

TASK [Create DB+WAL VGs] *******************************************************
Saturday 12 July 2025 20:10:57 +0000 (0:00:00.150) 0:00:13.335 *********
skipping: [testbed-node-3]

TASK [Print 'Create DB+WAL VGs'] ***********************************************
Saturday 12 July 2025 20:10:57 +0000 (0:00:00.128) 0:00:13.463 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})
skipping: [testbed-node-3]

TASK [Prepare variables for OSD count check] ***********************************
Saturday 12 July 2025 20:10:58 +0000 (0:00:00.118) 0:00:13.582 *********
ok: [testbed-node-3]

TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
Saturday 12 July 2025 20:10:58 +0000 (0:00:00.126) 0:00:13.708 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})
skipping: [testbed-node-3]

TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
Saturday 12 July 2025 20:10:58 +0000 (0:00:00.157) 0:00:13.866 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})
skipping: [testbed-node-3]

TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
Saturday 12 July 2025 20:10:58 +0000 (0:00:00.144) 0:00:14.010 *********
skipping: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})
skipping: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})
skipping: [testbed-node-3]

TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
Saturday 12 July 2025 20:10:58 +0000 (0:00:00.151) 0:00:14.162 *********
skipping: [testbed-node-3]

TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
Saturday 12 July 2025 20:10:58 +0000 (0:00:00.131) 0:00:14.294 *********
skipping: [testbed-node-3]

TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
Saturday 12 July 2025 20:10:58 +0000 (0:00:00.124) 0:00:14.419 *********
skipping: [testbed-node-3]

TASK [Print number of OSDs wanted per DB VG] ***********************************
Saturday 12 July 2025 20:10:59 +0000 (0:00:00.131) 0:00:14.550 *********
ok: [testbed-node-3] => {
    "_num_osds_wanted_per_db_vg": {}
}

TASK [Print number of OSDs wanted per WAL VG] **********************************
Saturday 12 July 2025 20:10:59 +0000 (0:00:00.131) 0:00:14.682 *********
ok: [testbed-node-3] => {
    "_num_osds_wanted_per_wal_vg": {}
}

TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
Saturday 12 July 2025 20:10:59 +0000 (0:00:00.274) 0:00:14.956 *********
ok: [testbed-node-3] => {
    "_num_osds_wanted_per_db_wal_vg": {}
}

TASK [Gather DB VGs with total and available size in bytes] ********************
Saturday 12 July 2025 20:10:59 +0000 (0:00:00.139) 0:00:15.096 *********
ok: [testbed-node-3]

TASK [Gather WAL VGs with total and available size in bytes] *******************
Saturday 12 July 2025 20:11:00 +0000 (0:00:00.650) 0:00:15.747 *********
ok: [testbed-node-3]

TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
Saturday 12 July 2025 20:11:00 +0000 (0:00:00.494) 0:00:16.241 *********
ok: [testbed-node-3]

TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
Saturday 12 July 2025 20:11:01 +0000 (0:00:00.507) 0:00:16.749 *********
ok: [testbed-node-3]

TASK [Calculate VG sizes (without buffer)] *************************************
Saturday 12 July 2025 20:11:01 +0000 (0:00:00.151) 0:00:16.900 *********
skipping: [testbed-node-3]

TASK [Calculate VG sizes (with buffer)] ****************************************
Saturday 12 July 2025 20:11:01 +0000 (0:00:00.126) 0:00:17.026 *********
skipping: [testbed-node-3]

TASK [Print LVM VGs report data] ***********************************************
Saturday 12 July 2025 20:11:01 +0000 (0:00:00.112) 0:00:17.139 *********
ok: [testbed-node-3] => {
    "vgs_report": {
        "vg": []
    }
orchestrator | } 2025-07-12 20:11:04.858474 | orchestrator | 2025-07-12 20:11:04.858480 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-07-12 20:11:04.858487 | orchestrator | Saturday 12 July 2025 20:11:01 +0000 (0:00:00.153) 0:00:17.292 ********* 2025-07-12 20:11:04.858494 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:04.858501 | orchestrator | 2025-07-12 20:11:04.858508 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-07-12 20:11:04.858514 | orchestrator | Saturday 12 July 2025 20:11:01 +0000 (0:00:00.105) 0:00:17.398 ********* 2025-07-12 20:11:04.858522 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:04.858529 | orchestrator | 2025-07-12 20:11:04.858536 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-07-12 20:11:04.858548 | orchestrator | Saturday 12 July 2025 20:11:02 +0000 (0:00:00.133) 0:00:17.532 ********* 2025-07-12 20:11:04.858552 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:04.858557 | orchestrator | 2025-07-12 20:11:04.858564 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-07-12 20:11:04.858571 | orchestrator | Saturday 12 July 2025 20:11:02 +0000 (0:00:00.128) 0:00:17.660 ********* 2025-07-12 20:11:04.858578 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:04.858585 | orchestrator | 2025-07-12 20:11:04.858592 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-07-12 20:11:04.858599 | orchestrator | Saturday 12 July 2025 20:11:02 +0000 (0:00:00.267) 0:00:17.928 ********* 2025-07-12 20:11:04.858606 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:04.858612 | orchestrator | 2025-07-12 20:11:04.858619 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-07-12 20:11:04.858626 | 
orchestrator | Saturday 12 July 2025 20:11:02 +0000 (0:00:00.117) 0:00:18.046 ********* 2025-07-12 20:11:04.858633 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:04.858640 | orchestrator | 2025-07-12 20:11:04.858647 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-07-12 20:11:04.858654 | orchestrator | Saturday 12 July 2025 20:11:02 +0000 (0:00:00.159) 0:00:18.206 ********* 2025-07-12 20:11:04.858660 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:04.858668 | orchestrator | 2025-07-12 20:11:04.858675 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-07-12 20:11:04.858682 | orchestrator | Saturday 12 July 2025 20:11:02 +0000 (0:00:00.113) 0:00:18.319 ********* 2025-07-12 20:11:04.858705 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:04.858717 | orchestrator | 2025-07-12 20:11:04.858730 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-07-12 20:11:04.858743 | orchestrator | Saturday 12 July 2025 20:11:02 +0000 (0:00:00.130) 0:00:18.449 ********* 2025-07-12 20:11:04.858769 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:04.858782 | orchestrator | 2025-07-12 20:11:04.858796 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-07-12 20:11:04.858807 | orchestrator | Saturday 12 July 2025 20:11:03 +0000 (0:00:00.127) 0:00:18.577 ********* 2025-07-12 20:11:04.858819 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:04.858832 | orchestrator | 2025-07-12 20:11:04.858845 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-07-12 20:11:04.858854 | orchestrator | Saturday 12 July 2025 20:11:03 +0000 (0:00:00.143) 0:00:18.720 ********* 2025-07-12 20:11:04.858861 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:04.858868 | orchestrator | 2025-07-12 
20:11:04.858878 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-07-12 20:11:04.858888 | orchestrator | Saturday 12 July 2025 20:11:03 +0000 (0:00:00.141) 0:00:18.862 ********* 2025-07-12 20:11:04.858895 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:04.858902 | orchestrator | 2025-07-12 20:11:04.858909 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-07-12 20:11:04.858916 | orchestrator | Saturday 12 July 2025 20:11:03 +0000 (0:00:00.125) 0:00:18.987 ********* 2025-07-12 20:11:04.858923 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:04.858930 | orchestrator | 2025-07-12 20:11:04.858937 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-07-12 20:11:04.858944 | orchestrator | Saturday 12 July 2025 20:11:03 +0000 (0:00:00.126) 0:00:19.114 ********* 2025-07-12 20:11:04.858951 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:04.858958 | orchestrator | 2025-07-12 20:11:04.858965 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-07-12 20:11:04.858972 | orchestrator | Saturday 12 July 2025 20:11:03 +0000 (0:00:00.129) 0:00:19.243 ********* 2025-07-12 20:11:04.858979 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})  2025-07-12 20:11:04.858993 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})  2025-07-12 20:11:04.858999 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:04.859007 | orchestrator | 2025-07-12 20:11:04.859014 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-07-12 20:11:04.859021 | orchestrator | Saturday 12 July 
2025 20:11:03 +0000 (0:00:00.154) 0:00:19.398 ********* 2025-07-12 20:11:04.859029 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})  2025-07-12 20:11:04.859036 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})  2025-07-12 20:11:04.859043 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:04.859049 | orchestrator | 2025-07-12 20:11:04.859056 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-07-12 20:11:04.859075 | orchestrator | Saturday 12 July 2025 20:11:04 +0000 (0:00:00.337) 0:00:19.736 ********* 2025-07-12 20:11:04.859082 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})  2025-07-12 20:11:04.859090 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})  2025-07-12 20:11:04.859097 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:04.859104 | orchestrator | 2025-07-12 20:11:04.859108 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-07-12 20:11:04.859113 | orchestrator | Saturday 12 July 2025 20:11:04 +0000 (0:00:00.150) 0:00:19.886 ********* 2025-07-12 20:11:04.859119 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})  2025-07-12 20:11:04.859124 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})  2025-07-12 20:11:04.859128 | orchestrator | 
skipping: [testbed-node-3] 2025-07-12 20:11:04.859132 | orchestrator | 2025-07-12 20:11:04.859136 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-07-12 20:11:04.859141 | orchestrator | Saturday 12 July 2025 20:11:04 +0000 (0:00:00.130) 0:00:20.016 ********* 2025-07-12 20:11:04.859148 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})  2025-07-12 20:11:04.859154 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})  2025-07-12 20:11:04.859161 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:04.859168 | orchestrator | 2025-07-12 20:11:04.859175 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-07-12 20:11:04.859182 | orchestrator | Saturday 12 July 2025 20:11:04 +0000 (0:00:00.149) 0:00:20.166 ********* 2025-07-12 20:11:04.859189 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})  2025-07-12 20:11:04.859200 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})  2025-07-12 20:11:10.006666 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:10.006775 | orchestrator | 2025-07-12 20:11:10.006792 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-07-12 20:11:10.006805 | orchestrator | Saturday 12 July 2025 20:11:04 +0000 (0:00:00.152) 0:00:20.318 ********* 2025-07-12 20:11:10.006816 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 
'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})  2025-07-12 20:11:10.006853 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})  2025-07-12 20:11:10.006865 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:10.006875 | orchestrator | 2025-07-12 20:11:10.006886 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-07-12 20:11:10.006897 | orchestrator | Saturday 12 July 2025 20:11:05 +0000 (0:00:00.150) 0:00:20.469 ********* 2025-07-12 20:11:10.006908 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})  2025-07-12 20:11:10.006919 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})  2025-07-12 20:11:10.006929 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:10.006940 | orchestrator | 2025-07-12 20:11:10.006950 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-07-12 20:11:10.006961 | orchestrator | Saturday 12 July 2025 20:11:05 +0000 (0:00:00.164) 0:00:20.633 ********* 2025-07-12 20:11:10.006971 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:11:10.006983 | orchestrator | 2025-07-12 20:11:10.006993 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-07-12 20:11:10.007004 | orchestrator | Saturday 12 July 2025 20:11:05 +0000 (0:00:00.494) 0:00:21.127 ********* 2025-07-12 20:11:10.007014 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:11:10.007025 | orchestrator | 2025-07-12 20:11:10.007035 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-07-12 20:11:10.007046 | orchestrator | Saturday 12 July 
2025 20:11:06 +0000 (0:00:00.499) 0:00:21.627 ********* 2025-07-12 20:11:10.007056 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:11:10.007105 | orchestrator | 2025-07-12 20:11:10.007116 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-07-12 20:11:10.007127 | orchestrator | Saturday 12 July 2025 20:11:06 +0000 (0:00:00.149) 0:00:21.776 ********* 2025-07-12 20:11:10.007138 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'vg_name': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'}) 2025-07-12 20:11:10.007150 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'vg_name': 'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'}) 2025-07-12 20:11:10.007161 | orchestrator | 2025-07-12 20:11:10.007171 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-07-12 20:11:10.007182 | orchestrator | Saturday 12 July 2025 20:11:06 +0000 (0:00:00.202) 0:00:21.979 ********* 2025-07-12 20:11:10.007193 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})  2025-07-12 20:11:10.007206 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})  2025-07-12 20:11:10.007219 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:10.007231 | orchestrator | 2025-07-12 20:11:10.007243 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-07-12 20:11:10.007255 | orchestrator | Saturday 12 July 2025 20:11:06 +0000 (0:00:00.138) 0:00:22.118 ********* 2025-07-12 20:11:10.007268 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 
'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})  2025-07-12 20:11:10.007280 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})  2025-07-12 20:11:10.007293 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:10.007314 | orchestrator | 2025-07-12 20:11:10.007326 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-07-12 20:11:10.007338 | orchestrator | Saturday 12 July 2025 20:11:06 +0000 (0:00:00.290) 0:00:22.408 ********* 2025-07-12 20:11:10.007350 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})  2025-07-12 20:11:10.007362 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})  2025-07-12 20:11:10.007374 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:11:10.007386 | orchestrator | 2025-07-12 20:11:10.007398 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-07-12 20:11:10.007410 | orchestrator | Saturday 12 July 2025 20:11:07 +0000 (0:00:00.153) 0:00:22.562 ********* 2025-07-12 20:11:10.007422 | orchestrator | ok: [testbed-node-3] => { 2025-07-12 20:11:10.007434 | orchestrator |  "lvm_report": { 2025-07-12 20:11:10.007446 | orchestrator |  "lv": [ 2025-07-12 20:11:10.007458 | orchestrator |  { 2025-07-12 20:11:10.007488 | orchestrator |  "lv_name": "osd-block-8d632655-ba67-5245-89a0-0cb971b00289", 2025-07-12 20:11:10.007502 | orchestrator |  "vg_name": "ceph-8d632655-ba67-5245-89a0-0cb971b00289" 2025-07-12 20:11:10.007513 | orchestrator |  }, 2025-07-12 20:11:10.007527 | orchestrator |  { 2025-07-12 20:11:10.007539 | orchestrator |  "lv_name": "osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31", 
2025-07-12 20:11:10.007552 | orchestrator |  "vg_name": "ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31" 2025-07-12 20:11:10.007564 | orchestrator |  } 2025-07-12 20:11:10.007576 | orchestrator |  ], 2025-07-12 20:11:10.007587 | orchestrator |  "pv": [ 2025-07-12 20:11:10.007598 | orchestrator |  { 2025-07-12 20:11:10.007608 | orchestrator |  "pv_name": "/dev/sdb", 2025-07-12 20:11:10.007619 | orchestrator |  "vg_name": "ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31" 2025-07-12 20:11:10.007630 | orchestrator |  }, 2025-07-12 20:11:10.007640 | orchestrator |  { 2025-07-12 20:11:10.007651 | orchestrator |  "pv_name": "/dev/sdc", 2025-07-12 20:11:10.007661 | orchestrator |  "vg_name": "ceph-8d632655-ba67-5245-89a0-0cb971b00289" 2025-07-12 20:11:10.007672 | orchestrator |  } 2025-07-12 20:11:10.007682 | orchestrator |  ] 2025-07-12 20:11:10.007693 | orchestrator |  } 2025-07-12 20:11:10.007704 | orchestrator | } 2025-07-12 20:11:10.007714 | orchestrator | 2025-07-12 20:11:10.007725 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-07-12 20:11:10.007736 | orchestrator | 2025-07-12 20:11:10.007746 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-07-12 20:11:10.007757 | orchestrator | Saturday 12 July 2025 20:11:07 +0000 (0:00:00.261) 0:00:22.824 ********* 2025-07-12 20:11:10.007768 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-07-12 20:11:10.007778 | orchestrator | 2025-07-12 20:11:10.007789 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-07-12 20:11:10.007800 | orchestrator | Saturday 12 July 2025 20:11:07 +0000 (0:00:00.232) 0:00:23.056 ********* 2025-07-12 20:11:10.007810 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:11:10.007821 | orchestrator | 2025-07-12 20:11:10.007831 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 
20:11:10.007842 | orchestrator | Saturday 12 July 2025 20:11:07 +0000 (0:00:00.227) 0:00:23.283 ********* 2025-07-12 20:11:10.007852 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-07-12 20:11:10.007863 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-07-12 20:11:10.007873 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-07-12 20:11:10.007884 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-07-12 20:11:10.007902 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-07-12 20:11:10.007913 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-07-12 20:11:10.007941 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-07-12 20:11:10.007952 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-07-12 20:11:10.007962 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-07-12 20:11:10.007973 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-07-12 20:11:10.007983 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-07-12 20:11:10.007993 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-07-12 20:11:10.008004 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-07-12 20:11:10.008014 | orchestrator | 2025-07-12 20:11:10.008025 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:11:10.008041 | orchestrator | Saturday 12 July 2025 20:11:08 +0000 (0:00:00.393) 
0:00:23.676 ********* 2025-07-12 20:11:10.008051 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:10.008088 | orchestrator | 2025-07-12 20:11:10.008101 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:11:10.008111 | orchestrator | Saturday 12 July 2025 20:11:08 +0000 (0:00:00.197) 0:00:23.874 ********* 2025-07-12 20:11:10.008122 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:10.008132 | orchestrator | 2025-07-12 20:11:10.008143 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:11:10.008154 | orchestrator | Saturday 12 July 2025 20:11:08 +0000 (0:00:00.202) 0:00:24.077 ********* 2025-07-12 20:11:10.008165 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:10.008175 | orchestrator | 2025-07-12 20:11:10.008186 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:11:10.008196 | orchestrator | Saturday 12 July 2025 20:11:08 +0000 (0:00:00.212) 0:00:24.289 ********* 2025-07-12 20:11:10.008207 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:10.008218 | orchestrator | 2025-07-12 20:11:10.008228 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:11:10.008239 | orchestrator | Saturday 12 July 2025 20:11:09 +0000 (0:00:00.550) 0:00:24.840 ********* 2025-07-12 20:11:10.008249 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:10.008260 | orchestrator | 2025-07-12 20:11:10.008270 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:11:10.008281 | orchestrator | Saturday 12 July 2025 20:11:09 +0000 (0:00:00.218) 0:00:25.058 ********* 2025-07-12 20:11:10.008292 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:10.008302 | orchestrator | 2025-07-12 20:11:10.008313 | orchestrator | TASK [Add known links to the list 
of available block devices] ****************** 2025-07-12 20:11:10.008323 | orchestrator | Saturday 12 July 2025 20:11:09 +0000 (0:00:00.192) 0:00:25.250 ********* 2025-07-12 20:11:10.008334 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:10.008345 | orchestrator | 2025-07-12 20:11:10.008362 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:11:20.875632 | orchestrator | Saturday 12 July 2025 20:11:09 +0000 (0:00:00.215) 0:00:25.465 ********* 2025-07-12 20:11:20.875727 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:20.875739 | orchestrator | 2025-07-12 20:11:20.875747 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:11:20.875753 | orchestrator | Saturday 12 July 2025 20:11:10 +0000 (0:00:00.218) 0:00:25.683 ********* 2025-07-12 20:11:20.875760 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_956b92a8-e2a8-4c28-b21e-590538c1fc3c) 2025-07-12 20:11:20.875847 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_956b92a8-e2a8-4c28-b21e-590538c1fc3c) 2025-07-12 20:11:20.875856 | orchestrator | 2025-07-12 20:11:20.875862 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:11:20.875867 | orchestrator | Saturday 12 July 2025 20:11:10 +0000 (0:00:00.453) 0:00:26.137 ********* 2025-07-12 20:11:20.875874 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_cbc49688-9ad7-4fd0-a52c-a19b0583b25c) 2025-07-12 20:11:20.875881 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_cbc49688-9ad7-4fd0-a52c-a19b0583b25c) 2025-07-12 20:11:20.875887 | orchestrator | 2025-07-12 20:11:20.875893 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:11:20.875900 | orchestrator | Saturday 12 July 2025 20:11:11 +0000 (0:00:00.464) 0:00:26.602 ********* 
2025-07-12 20:11:20.875906 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_1d5b9d5f-7727-4753-bdb1-c3a309291ad5) 2025-07-12 20:11:20.875912 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_1d5b9d5f-7727-4753-bdb1-c3a309291ad5) 2025-07-12 20:11:20.875918 | orchestrator | 2025-07-12 20:11:20.875924 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:11:20.875931 | orchestrator | Saturday 12 July 2025 20:11:11 +0000 (0:00:00.454) 0:00:27.057 ********* 2025-07-12 20:11:20.875936 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_736d04ae-95cc-4835-aff1-6fbe44d77808) 2025-07-12 20:11:20.875940 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_736d04ae-95cc-4835-aff1-6fbe44d77808) 2025-07-12 20:11:20.875943 | orchestrator | 2025-07-12 20:11:20.875947 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:11:20.875951 | orchestrator | Saturday 12 July 2025 20:11:12 +0000 (0:00:00.428) 0:00:27.485 ********* 2025-07-12 20:11:20.875954 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-12 20:11:20.875958 | orchestrator | 2025-07-12 20:11:20.875962 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:20.875965 | orchestrator | Saturday 12 July 2025 20:11:12 +0000 (0:00:00.336) 0:00:27.821 ********* 2025-07-12 20:11:20.875969 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-07-12 20:11:20.875974 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-07-12 20:11:20.875977 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-07-12 20:11:20.875981 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for 
testbed-node-4 => (item=loop3) 2025-07-12 20:11:20.875985 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-07-12 20:11:20.875988 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-07-12 20:11:20.875992 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-07-12 20:11:20.876006 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-07-12 20:11:20.876010 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-07-12 20:11:20.876014 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-07-12 20:11:20.876018 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-07-12 20:11:20.876021 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-07-12 20:11:20.876025 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-07-12 20:11:20.876028 | orchestrator | 2025-07-12 20:11:20.876032 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:20.876036 | orchestrator | Saturday 12 July 2025 20:11:12 +0000 (0:00:00.412) 0:00:28.234 ********* 2025-07-12 20:11:20.876045 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:20.876048 | orchestrator | 2025-07-12 20:11:20.876052 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:20.876056 | orchestrator | Saturday 12 July 2025 20:11:13 +0000 (0:00:00.709) 0:00:28.943 ********* 2025-07-12 20:11:20.876059 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:20.876063 | orchestrator | 2025-07-12 20:11:20.876117 | orchestrator 
| TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:20.876124 | orchestrator | Saturday 12 July 2025 20:11:13 +0000 (0:00:00.261) 0:00:29.205 ********* 2025-07-12 20:11:20.876130 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:20.876135 | orchestrator | 2025-07-12 20:11:20.876141 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:20.876146 | orchestrator | Saturday 12 July 2025 20:11:14 +0000 (0:00:00.269) 0:00:29.474 ********* 2025-07-12 20:11:20.876151 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:20.876156 | orchestrator | 2025-07-12 20:11:20.876180 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:20.876187 | orchestrator | Saturday 12 July 2025 20:11:14 +0000 (0:00:00.246) 0:00:29.721 ********* 2025-07-12 20:11:20.876194 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:20.876199 | orchestrator | 2025-07-12 20:11:20.876204 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:20.876208 | orchestrator | Saturday 12 July 2025 20:11:14 +0000 (0:00:00.305) 0:00:30.026 ********* 2025-07-12 20:11:20.876212 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:20.876216 | orchestrator | 2025-07-12 20:11:20.876221 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:20.876225 | orchestrator | Saturday 12 July 2025 20:11:14 +0000 (0:00:00.218) 0:00:30.245 ********* 2025-07-12 20:11:20.876229 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:20.876233 | orchestrator | 2025-07-12 20:11:20.876237 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:20.876241 | orchestrator | Saturday 12 July 2025 20:11:14 +0000 (0:00:00.198) 0:00:30.443 ********* 2025-07-12 
20:11:20.876245 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:20.876250 | orchestrator | 2025-07-12 20:11:20.876254 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:20.876258 | orchestrator | Saturday 12 July 2025 20:11:15 +0000 (0:00:00.201) 0:00:30.645 ********* 2025-07-12 20:11:20.876262 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-07-12 20:11:20.876267 | orchestrator | 2025-07-12 20:11:20.876271 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:20.876275 | orchestrator | Saturday 12 July 2025 20:11:15 +0000 (0:00:00.381) 0:00:31.026 ********* 2025-07-12 20:11:20.876279 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:20.876283 | orchestrator | 2025-07-12 20:11:20.876288 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:20.876292 | orchestrator | Saturday 12 July 2025 20:11:15 +0000 (0:00:00.257) 0:00:31.283 ********* 2025-07-12 20:11:20.876296 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:20.876300 | orchestrator | 2025-07-12 20:11:20.876305 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:20.876309 | orchestrator | Saturday 12 July 2025 20:11:16 +0000 (0:00:00.230) 0:00:31.513 ********* 2025-07-12 20:11:20.876313 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:20.876317 | orchestrator | 2025-07-12 20:11:20.876321 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:20.876325 | orchestrator | Saturday 12 July 2025 20:11:16 +0000 (0:00:00.257) 0:00:31.771 ********* 2025-07-12 20:11:20.876330 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:20.876334 | orchestrator | 2025-07-12 20:11:20.876338 | orchestrator | TASK [Check whether ceph_db_wal_devices is used 
exclusively] ******************* 2025-07-12 20:11:20.876347 | orchestrator | Saturday 12 July 2025 20:11:16 +0000 (0:00:00.676) 0:00:32.447 ********* 2025-07-12 20:11:20.876351 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:20.876356 | orchestrator | 2025-07-12 20:11:20.876360 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-07-12 20:11:20.876364 | orchestrator | Saturday 12 July 2025 20:11:17 +0000 (0:00:00.148) 0:00:32.596 ********* 2025-07-12 20:11:20.876368 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c2ea885c-c09d-528a-8e30-9d64ecae89b3'}}) 2025-07-12 20:11:20.876373 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5037a2b3-768c-53ee-9f72-df4915d4fb6f'}}) 2025-07-12 20:11:20.876377 | orchestrator | 2025-07-12 20:11:20.876382 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-07-12 20:11:20.876386 | orchestrator | Saturday 12 July 2025 20:11:17 +0000 (0:00:00.214) 0:00:32.810 ********* 2025-07-12 20:11:20.876391 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'}) 2025-07-12 20:11:20.876397 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'data_vg': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'}) 2025-07-12 20:11:20.876402 | orchestrator | 2025-07-12 20:11:20.876406 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-07-12 20:11:20.876411 | orchestrator | Saturday 12 July 2025 20:11:19 +0000 (0:00:01.839) 0:00:34.649 ********* 2025-07-12 20:11:20.876415 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'})  2025-07-12 
20:11:20.876421 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'data_vg': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'})  2025-07-12 20:11:20.876425 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:20.876429 | orchestrator | 2025-07-12 20:11:20.876434 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-07-12 20:11:20.876438 | orchestrator | Saturday 12 July 2025 20:11:19 +0000 (0:00:00.170) 0:00:34.820 ********* 2025-07-12 20:11:20.876442 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'}) 2025-07-12 20:11:20.876447 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'data_vg': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'}) 2025-07-12 20:11:20.876451 | orchestrator | 2025-07-12 20:11:20.876455 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-07-12 20:11:20.876460 | orchestrator | Saturday 12 July 2025 20:11:20 +0000 (0:00:01.332) 0:00:36.152 ********* 2025-07-12 20:11:20.876467 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'})  2025-07-12 20:11:26.101639 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'data_vg': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'})  2025-07-12 20:11:26.101774 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:26.101791 | orchestrator | 2025-07-12 20:11:26.101804 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-07-12 20:11:26.101828 | orchestrator | Saturday 12 July 2025 20:11:20 +0000 (0:00:00.181) 0:00:36.334 ********* 2025-07-12 20:11:26.101881 | 
orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:26.101894 | orchestrator | 2025-07-12 20:11:26.101906 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-07-12 20:11:26.101917 | orchestrator | Saturday 12 July 2025 20:11:21 +0000 (0:00:00.143) 0:00:36.478 ********* 2025-07-12 20:11:26.101928 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'})  2025-07-12 20:11:26.101966 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'data_vg': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'})  2025-07-12 20:11:26.101978 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:26.101988 | orchestrator | 2025-07-12 20:11:26.101999 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-07-12 20:11:26.102010 | orchestrator | Saturday 12 July 2025 20:11:21 +0000 (0:00:00.159) 0:00:36.637 ********* 2025-07-12 20:11:26.102117 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:26.102130 | orchestrator | 2025-07-12 20:11:26.102142 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-07-12 20:11:26.102153 | orchestrator | Saturday 12 July 2025 20:11:21 +0000 (0:00:00.126) 0:00:36.763 ********* 2025-07-12 20:11:26.102163 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'})  2025-07-12 20:11:26.102191 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'data_vg': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'})  2025-07-12 20:11:26.102205 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:26.102217 | orchestrator | 2025-07-12 20:11:26.102259 | orchestrator | TASK 
[Create DB+WAL VGs] ******************************************************* 2025-07-12 20:11:26.102271 | orchestrator | Saturday 12 July 2025 20:11:21 +0000 (0:00:00.147) 0:00:36.911 ********* 2025-07-12 20:11:26.102284 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:26.102295 | orchestrator | 2025-07-12 20:11:26.102307 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-07-12 20:11:26.102319 | orchestrator | Saturday 12 July 2025 20:11:21 +0000 (0:00:00.126) 0:00:37.037 ********* 2025-07-12 20:11:26.102331 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'})  2025-07-12 20:11:26.102344 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'data_vg': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'})  2025-07-12 20:11:26.102356 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:26.102368 | orchestrator | 2025-07-12 20:11:26.102380 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-07-12 20:11:26.102392 | orchestrator | Saturday 12 July 2025 20:11:21 +0000 (0:00:00.299) 0:00:37.336 ********* 2025-07-12 20:11:26.102404 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:11:26.102417 | orchestrator | 2025-07-12 20:11:26.102434 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-07-12 20:11:26.102447 | orchestrator | Saturday 12 July 2025 20:11:22 +0000 (0:00:00.151) 0:00:37.487 ********* 2025-07-12 20:11:26.102458 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'})  2025-07-12 20:11:26.102475 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'data_vg': 
'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'})  2025-07-12 20:11:26.102492 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:26.102510 | orchestrator | 2025-07-12 20:11:26.102526 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-07-12 20:11:26.102542 | orchestrator | Saturday 12 July 2025 20:11:22 +0000 (0:00:00.139) 0:00:37.627 ********* 2025-07-12 20:11:26.102557 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'})  2025-07-12 20:11:26.102573 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'data_vg': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'})  2025-07-12 20:11:26.102591 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:26.102620 | orchestrator | 2025-07-12 20:11:26.102637 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-07-12 20:11:26.102654 | orchestrator | Saturday 12 July 2025 20:11:22 +0000 (0:00:00.124) 0:00:37.752 ********* 2025-07-12 20:11:26.102671 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'})  2025-07-12 20:11:26.102714 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'data_vg': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'})  2025-07-12 20:11:26.102734 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:26.102752 | orchestrator | 2025-07-12 20:11:26.102771 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-07-12 20:11:26.102790 | orchestrator | Saturday 12 July 2025 20:11:22 +0000 (0:00:00.146) 0:00:37.898 ********* 2025-07-12 20:11:26.102805 | orchestrator | skipping: [testbed-node-4] 2025-07-12 
20:11:26.102816 | orchestrator | 2025-07-12 20:11:26.102826 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-07-12 20:11:26.102837 | orchestrator | Saturday 12 July 2025 20:11:22 +0000 (0:00:00.117) 0:00:38.016 ********* 2025-07-12 20:11:26.102847 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:26.102858 | orchestrator | 2025-07-12 20:11:26.102868 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-07-12 20:11:26.102879 | orchestrator | Saturday 12 July 2025 20:11:22 +0000 (0:00:00.118) 0:00:38.134 ********* 2025-07-12 20:11:26.102889 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:26.102900 | orchestrator | 2025-07-12 20:11:26.102910 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-07-12 20:11:26.102921 | orchestrator | Saturday 12 July 2025 20:11:22 +0000 (0:00:00.140) 0:00:38.274 ********* 2025-07-12 20:11:26.102931 | orchestrator | ok: [testbed-node-4] => { 2025-07-12 20:11:26.102941 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-07-12 20:11:26.102952 | orchestrator | } 2025-07-12 20:11:26.102963 | orchestrator | 2025-07-12 20:11:26.102973 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-07-12 20:11:26.102984 | orchestrator | Saturday 12 July 2025 20:11:22 +0000 (0:00:00.139) 0:00:38.414 ********* 2025-07-12 20:11:26.102994 | orchestrator | ok: [testbed-node-4] => { 2025-07-12 20:11:26.103004 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-07-12 20:11:26.103015 | orchestrator | } 2025-07-12 20:11:26.103025 | orchestrator | 2025-07-12 20:11:26.103036 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-07-12 20:11:26.103046 | orchestrator | Saturday 12 July 2025 20:11:23 +0000 (0:00:00.143) 0:00:38.558 ********* 2025-07-12 20:11:26.103057 | orchestrator | ok: 
[testbed-node-4] => { 2025-07-12 20:11:26.103094 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-07-12 20:11:26.103106 | orchestrator | } 2025-07-12 20:11:26.103116 | orchestrator | 2025-07-12 20:11:26.103127 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-07-12 20:11:26.103137 | orchestrator | Saturday 12 July 2025 20:11:23 +0000 (0:00:00.149) 0:00:38.707 ********* 2025-07-12 20:11:26.103148 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:11:26.103158 | orchestrator | 2025-07-12 20:11:26.103169 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-07-12 20:11:26.103179 | orchestrator | Saturday 12 July 2025 20:11:23 +0000 (0:00:00.471) 0:00:39.179 ********* 2025-07-12 20:11:26.103190 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:11:26.103200 | orchestrator | 2025-07-12 20:11:26.103211 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-07-12 20:11:26.103221 | orchestrator | Saturday 12 July 2025 20:11:24 +0000 (0:00:00.641) 0:00:39.820 ********* 2025-07-12 20:11:26.103232 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:11:26.103242 | orchestrator | 2025-07-12 20:11:26.103253 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-07-12 20:11:26.103263 | orchestrator | Saturday 12 July 2025 20:11:24 +0000 (0:00:00.523) 0:00:40.343 ********* 2025-07-12 20:11:26.103283 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:11:26.103294 | orchestrator | 2025-07-12 20:11:26.103304 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-07-12 20:11:26.103315 | orchestrator | Saturday 12 July 2025 20:11:25 +0000 (0:00:00.148) 0:00:40.492 ********* 2025-07-12 20:11:26.103325 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:26.103336 | orchestrator | 2025-07-12 20:11:26.103347 | orchestrator | 
TASK [Calculate VG sizes (with buffer)] **************************************** 2025-07-12 20:11:26.103364 | orchestrator | Saturday 12 July 2025 20:11:25 +0000 (0:00:00.125) 0:00:40.618 ********* 2025-07-12 20:11:26.103375 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:26.103385 | orchestrator | 2025-07-12 20:11:26.103396 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-07-12 20:11:26.103406 | orchestrator | Saturday 12 July 2025 20:11:25 +0000 (0:00:00.102) 0:00:40.721 ********* 2025-07-12 20:11:26.103417 | orchestrator | ok: [testbed-node-4] => { 2025-07-12 20:11:26.103427 | orchestrator |  "vgs_report": { 2025-07-12 20:11:26.103438 | orchestrator |  "vg": [] 2025-07-12 20:11:26.103448 | orchestrator |  } 2025-07-12 20:11:26.103459 | orchestrator | } 2025-07-12 20:11:26.103469 | orchestrator | 2025-07-12 20:11:26.103480 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-07-12 20:11:26.103490 | orchestrator | Saturday 12 July 2025 20:11:25 +0000 (0:00:00.140) 0:00:40.861 ********* 2025-07-12 20:11:26.103500 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:26.103511 | orchestrator | 2025-07-12 20:11:26.103521 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-07-12 20:11:26.103531 | orchestrator | Saturday 12 July 2025 20:11:25 +0000 (0:00:00.162) 0:00:41.023 ********* 2025-07-12 20:11:26.103542 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:26.103552 | orchestrator | 2025-07-12 20:11:26.103563 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-07-12 20:11:26.103576 | orchestrator | Saturday 12 July 2025 20:11:25 +0000 (0:00:00.136) 0:00:41.160 ********* 2025-07-12 20:11:26.103594 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:26.103643 | orchestrator | 2025-07-12 20:11:26.103659 | orchestrator | TASK 
[Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-07-12 20:11:26.103675 | orchestrator | Saturday 12 July 2025 20:11:25 +0000 (0:00:00.139) 0:00:41.299 ********* 2025-07-12 20:11:26.103693 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:26.103712 | orchestrator | 2025-07-12 20:11:26.103730 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-07-12 20:11:26.103748 | orchestrator | Saturday 12 July 2025 20:11:25 +0000 (0:00:00.115) 0:00:41.415 ********* 2025-07-12 20:11:26.103766 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:26.103779 | orchestrator | 2025-07-12 20:11:26.103790 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-07-12 20:11:26.103812 | orchestrator | Saturday 12 July 2025 20:11:26 +0000 (0:00:00.140) 0:00:41.556 ********* 2025-07-12 20:11:31.048503 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:31.048608 | orchestrator | 2025-07-12 20:11:31.048625 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-07-12 20:11:31.048638 | orchestrator | Saturday 12 July 2025 20:11:26 +0000 (0:00:00.136) 0:00:41.692 ********* 2025-07-12 20:11:31.048649 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:31.048660 | orchestrator | 2025-07-12 20:11:31.048671 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-07-12 20:11:31.048682 | orchestrator | Saturday 12 July 2025 20:11:26 +0000 (0:00:00.373) 0:00:42.066 ********* 2025-07-12 20:11:31.048693 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:31.048703 | orchestrator | 2025-07-12 20:11:31.048714 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-07-12 20:11:31.048725 | orchestrator | Saturday 12 July 2025 20:11:26 +0000 (0:00:00.155) 0:00:42.221 ********* 2025-07-12 
20:11:31.048735 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:31.048770 | orchestrator | 2025-07-12 20:11:31.048781 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-07-12 20:11:31.048792 | orchestrator | Saturday 12 July 2025 20:11:26 +0000 (0:00:00.162) 0:00:42.384 ********* 2025-07-12 20:11:31.048802 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:31.048813 | orchestrator | 2025-07-12 20:11:31.048823 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-07-12 20:11:31.048834 | orchestrator | Saturday 12 July 2025 20:11:27 +0000 (0:00:00.149) 0:00:42.533 ********* 2025-07-12 20:11:31.048845 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:31.048855 | orchestrator | 2025-07-12 20:11:31.048866 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-07-12 20:11:31.048876 | orchestrator | Saturday 12 July 2025 20:11:27 +0000 (0:00:00.170) 0:00:42.704 ********* 2025-07-12 20:11:31.048887 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:31.048898 | orchestrator | 2025-07-12 20:11:31.048908 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-07-12 20:11:31.048919 | orchestrator | Saturday 12 July 2025 20:11:27 +0000 (0:00:00.151) 0:00:42.856 ********* 2025-07-12 20:11:31.048930 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:31.048940 | orchestrator | 2025-07-12 20:11:31.048951 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-07-12 20:11:31.048961 | orchestrator | Saturday 12 July 2025 20:11:27 +0000 (0:00:00.154) 0:00:43.010 ********* 2025-07-12 20:11:31.048972 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:31.048983 | orchestrator | 2025-07-12 20:11:31.048993 | orchestrator | TASK [Create DB LVs for ceph_db_devices] 
*************************************** 2025-07-12 20:11:31.049004 | orchestrator | Saturday 12 July 2025 20:11:27 +0000 (0:00:00.149) 0:00:43.160 ********* 2025-07-12 20:11:31.049016 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'})  2025-07-12 20:11:31.049028 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'data_vg': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'})  2025-07-12 20:11:31.049041 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:31.049054 | orchestrator | 2025-07-12 20:11:31.049092 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-07-12 20:11:31.049107 | orchestrator | Saturday 12 July 2025 20:11:27 +0000 (0:00:00.156) 0:00:43.317 ********* 2025-07-12 20:11:31.049119 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'})  2025-07-12 20:11:31.049147 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'data_vg': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'})  2025-07-12 20:11:31.049159 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:31.049171 | orchestrator | 2025-07-12 20:11:31.049183 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-07-12 20:11:31.049195 | orchestrator | Saturday 12 July 2025 20:11:28 +0000 (0:00:00.160) 0:00:43.478 ********* 2025-07-12 20:11:31.049207 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'})  2025-07-12 20:11:31.049220 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 
'data_vg': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'})  2025-07-12 20:11:31.049232 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:31.049244 | orchestrator | 2025-07-12 20:11:31.049256 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-07-12 20:11:31.049269 | orchestrator | Saturday 12 July 2025 20:11:28 +0000 (0:00:00.147) 0:00:43.625 ********* 2025-07-12 20:11:31.049282 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'})  2025-07-12 20:11:31.049301 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'data_vg': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'})  2025-07-12 20:11:31.049312 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:31.049322 | orchestrator | 2025-07-12 20:11:31.049333 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-07-12 20:11:31.049343 | orchestrator | Saturday 12 July 2025 20:11:28 +0000 (0:00:00.154) 0:00:43.780 ********* 2025-07-12 20:11:31.049354 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'})  2025-07-12 20:11:31.049382 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'data_vg': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'})  2025-07-12 20:11:31.049393 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:31.049403 | orchestrator | 2025-07-12 20:11:31.049414 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-07-12 20:11:31.049425 | orchestrator | Saturday 12 July 2025 20:11:28 +0000 (0:00:00.425) 0:00:44.205 ********* 2025-07-12 20:11:31.049435 | orchestrator | skipping: [testbed-node-4] => 
(item={'data': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'})  2025-07-12 20:11:31.049446 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'data_vg': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'})  2025-07-12 20:11:31.049456 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:31.049467 | orchestrator | 2025-07-12 20:11:31.049477 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-07-12 20:11:31.049488 | orchestrator | Saturday 12 July 2025 20:11:28 +0000 (0:00:00.152) 0:00:44.357 ********* 2025-07-12 20:11:31.049499 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'})  2025-07-12 20:11:31.049509 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'data_vg': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'})  2025-07-12 20:11:31.049520 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:31.049530 | orchestrator | 2025-07-12 20:11:31.049541 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-07-12 20:11:31.049551 | orchestrator | Saturday 12 July 2025 20:11:29 +0000 (0:00:00.169) 0:00:44.526 ********* 2025-07-12 20:11:31.049562 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'})  2025-07-12 20:11:31.049573 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'data_vg': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'})  2025-07-12 20:11:31.049583 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:11:31.049594 | orchestrator | 2025-07-12 20:11:31.049604 | orchestrator | TASK [Get list of 
Ceph LVs with associated VGs] ******************************** 2025-07-12 20:11:31.049615 | orchestrator | Saturday 12 July 2025 20:11:29 +0000 (0:00:00.162) 0:00:44.689 ********* 2025-07-12 20:11:31.049626 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:11:31.049637 | orchestrator | 2025-07-12 20:11:31.049648 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-07-12 20:11:31.049658 | orchestrator | Saturday 12 July 2025 20:11:29 +0000 (0:00:00.502) 0:00:45.192 ********* 2025-07-12 20:11:31.049669 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:11:31.049679 | orchestrator | 2025-07-12 20:11:31.049690 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-07-12 20:11:31.049700 | orchestrator | Saturday 12 July 2025 20:11:30 +0000 (0:00:00.507) 0:00:45.699 ********* 2025-07-12 20:11:31.049711 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:11:31.049722 | orchestrator | 2025-07-12 20:11:31.049738 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-07-12 20:11:31.049749 | orchestrator | Saturday 12 July 2025 20:11:30 +0000 (0:00:00.177) 0:00:45.876 ********* 2025-07-12 20:11:31.049801 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'vg_name': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'}) 2025-07-12 20:11:31.049814 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'vg_name': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'}) 2025-07-12 20:11:31.049825 | orchestrator | 2025-07-12 20:11:31.049835 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-07-12 20:11:31.049846 | orchestrator | Saturday 12 July 2025 20:11:30 +0000 (0:00:00.165) 0:00:46.041 ********* 2025-07-12 20:11:31.049857 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'})
2025-07-12 20:11:31.049867 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'data_vg': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'})
2025-07-12 20:11:31.049878 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:11:31.049889 | orchestrator |
2025-07-12 20:11:31.049899 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-07-12 20:11:31.049910 | orchestrator | Saturday 12 July 2025 20:11:30 +0000 (0:00:00.169) 0:00:46.211 *********
2025-07-12 20:11:31.049920 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'})
2025-07-12 20:11:31.049931 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'data_vg': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'})
2025-07-12 20:11:31.049942 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:11:31.049952 | orchestrator |
2025-07-12 20:11:31.049963 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-07-12 20:11:31.049973 | orchestrator | Saturday 12 July 2025 20:11:30 +0000 (0:00:00.153) 0:00:46.365 *********
2025-07-12 20:11:31.049991 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'})
2025-07-12 20:11:37.646936 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'data_vg': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'})
2025-07-12 20:11:37.647107 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:11:37.647135 | orchestrator |
2025-07-12 20:11:37.647155 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-07-12 20:11:37.647175 | orchestrator | Saturday 12 July 2025 20:11:31 +0000 (0:00:00.530) 0:00:46.508 *********
2025-07-12 20:11:37.647193 | orchestrator | ok: [testbed-node-4] => {
2025-07-12 20:11:37.647211 | orchestrator |     "lvm_report": {
2025-07-12 20:11:37.647229 | orchestrator |         "lv": [
2025-07-12 20:11:37.647246 | orchestrator |             {
2025-07-12 20:11:37.647263 | orchestrator |                 "lv_name": "osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f",
2025-07-12 20:11:37.647281 | orchestrator |                 "vg_name": "ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f"
2025-07-12 20:11:37.647299 | orchestrator |             },
2025-07-12 20:11:37.647316 | orchestrator |             {
2025-07-12 20:11:37.647333 | orchestrator |                 "lv_name": "osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3",
2025-07-12 20:11:37.647352 | orchestrator |                 "vg_name": "ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3"
2025-07-12 20:11:37.647368 | orchestrator |             }
2025-07-12 20:11:37.647385 | orchestrator |         ],
2025-07-12 20:11:37.647402 | orchestrator |         "pv": [
2025-07-12 20:11:37.647418 | orchestrator |             {
2025-07-12 20:11:37.647434 | orchestrator |                 "pv_name": "/dev/sdb",
2025-07-12 20:11:37.647454 | orchestrator |                 "vg_name": "ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3"
2025-07-12 20:11:37.647507 | orchestrator |             },
2025-07-12 20:11:37.647526 | orchestrator |             {
2025-07-12 20:11:37.647545 | orchestrator |                 "pv_name": "/dev/sdc",
2025-07-12 20:11:37.647564 | orchestrator |                 "vg_name": "ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f"
2025-07-12 20:11:37.647584 | orchestrator |             }
2025-07-12 20:11:37.647602 | orchestrator |         ]
2025-07-12 20:11:37.647621 | orchestrator |     }
2025-07-12 20:11:37.647639 | orchestrator | }
2025-07-12 20:11:37.647658 | orchestrator |
2025-07-12 20:11:37.647677 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-07-12 20:11:37.647696 | orchestrator |
2025-07-12 20:11:37.647713 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-07-12 20:11:37.647731 | orchestrator | Saturday 12 July 2025 20:11:31 +0000 (0:00:00.275) 0:00:47.038 *********
2025-07-12 20:11:37.647750 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-07-12 20:11:37.647768 | orchestrator |
2025-07-12 20:11:37.647786 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-07-12 20:11:37.647804 | orchestrator | Saturday 12 July 2025 20:11:31 +0000 (0:00:00.238) 0:00:47.314 *********
2025-07-12 20:11:37.647821 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:11:37.647838 | orchestrator |
2025-07-12 20:11:37.647856 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 20:11:37.647875 | orchestrator | Saturday 12 July 2025 20:11:32 +0000 (0:00:00.481) 0:00:47.552 *********
2025-07-12 20:11:37.647893 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-07-12 20:11:37.647912 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-07-12 20:11:37.647930 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-07-12 20:11:37.647947 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-07-12 20:11:37.647984 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-07-12 20:11:37.648001 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-07-12 20:11:37.648020 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-07-12 20:11:37.648038 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-07-12 20:11:37.648056 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-07-12 20:11:37.648102 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-07-12 20:11:37.648120 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-07-12 20:11:37.648137 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-07-12 20:11:37.648155 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-07-12 20:11:37.648173 | orchestrator |
2025-07-12 20:11:37.648191 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 20:11:37.648209 | orchestrator | Saturday 12 July 2025 20:11:32 +0000 (0:00:00.481) 0:00:48.034 *********
2025-07-12 20:11:37.648226 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:37.648245 | orchestrator |
2025-07-12 20:11:37.648263 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 20:11:37.648282 | orchestrator | Saturday 12 July 2025 20:11:32 +0000 (0:00:00.198) 0:00:48.233 *********
2025-07-12 20:11:37.648299 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:37.648317 | orchestrator |
2025-07-12 20:11:37.648334 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 20:11:37.648352 | orchestrator | Saturday 12 July 2025 20:11:32 +0000 (0:00:00.211) 0:00:48.444 *********
2025-07-12 20:11:37.648368 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:37.648404 | orchestrator |
2025-07-12 20:11:37.648423 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-07-12 20:11:37.648442 | orchestrator | Saturday 12 July 2025 20:11:33 +0000 (0:00:00.177) 0:00:48.622 *********
2025-07-12 20:11:37.648460 | orchestrator | skipping: [testbed-node-5]
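[Editor's note: the `_add-device-links.yml` include above runs once per kernel device (loop0..loop7, sda..sdd, sr0) and, as the later `ok:` results show, attaches stable `/dev/disk/by-id` names (e.g. `scsi-0QEMU_QEMU_HARDDISK_…`) to the matching device. A minimal sketch of that grouping step, assuming the link→target pairs have already been read from the filesystem; the `by_id` sample data and function name below are illustrative, not taken from the playbook:]

```python
from collections import defaultdict


def map_links_to_devices(by_id: dict) -> dict:
    """Invert a {by-id link: kernel device} mapping into
    {kernel device: [by-id links]}, one entry per block device."""
    devices = defaultdict(list)
    for link, target in by_id.items():
        devices[target].append(link)
    return {dev: sorted(links) for dev, links in devices.items()}


# Illustrative data only; on a real node these pairs would come from
# resolving the symlinks under /dev/disk/by-id.
by_id = {
    "scsi-0QEMU_QEMU_HARDDISK_a9eb58a9-7a8d-4884-8549-7422e45233bf": "sdb",
    "scsi-SQEMU_QEMU_HARDDISK_a9eb58a9-7a8d-4884-8549-7422e45233bf": "sdb",
    "ata-QEMU_DVD-ROM_QM00001": "sr0",
}

print(map_links_to_devices(by_id))
```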
2025-07-12 20:11:37.648477 | orchestrator | 2025-07-12 20:11:37.648524 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:11:37.648543 | orchestrator | Saturday 12 July 2025 20:11:33 +0000 (0:00:00.198) 0:00:48.821 ********* 2025-07-12 20:11:37.648561 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:37.648585 | orchestrator | 2025-07-12 20:11:37.648604 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:11:37.648622 | orchestrator | Saturday 12 July 2025 20:11:33 +0000 (0:00:00.215) 0:00:49.037 ********* 2025-07-12 20:11:37.648640 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:37.648659 | orchestrator | 2025-07-12 20:11:37.648678 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:11:37.648696 | orchestrator | Saturday 12 July 2025 20:11:33 +0000 (0:00:00.209) 0:00:49.246 ********* 2025-07-12 20:11:37.648713 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:37.648730 | orchestrator | 2025-07-12 20:11:37.648748 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:11:37.648767 | orchestrator | Saturday 12 July 2025 20:11:34 +0000 (0:00:00.663) 0:00:49.910 ********* 2025-07-12 20:11:37.648785 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:37.648803 | orchestrator | 2025-07-12 20:11:37.648820 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:11:37.648838 | orchestrator | Saturday 12 July 2025 20:11:34 +0000 (0:00:00.221) 0:00:50.131 ********* 2025-07-12 20:11:37.648856 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a9eb58a9-7a8d-4884-8549-7422e45233bf) 2025-07-12 20:11:37.648876 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a9eb58a9-7a8d-4884-8549-7422e45233bf) 2025-07-12 
20:11:37.648894 | orchestrator | 2025-07-12 20:11:37.648911 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:11:37.648930 | orchestrator | Saturday 12 July 2025 20:11:35 +0000 (0:00:00.422) 0:00:50.553 ********* 2025-07-12 20:11:37.648948 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9f08906f-6338-431f-a878-f727643915a4) 2025-07-12 20:11:37.648966 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9f08906f-6338-431f-a878-f727643915a4) 2025-07-12 20:11:37.648984 | orchestrator | 2025-07-12 20:11:37.649002 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:11:37.649020 | orchestrator | Saturday 12 July 2025 20:11:35 +0000 (0:00:00.484) 0:00:51.038 ********* 2025-07-12 20:11:37.649039 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_1628f950-5804-44ef-9d42-f709daecc346) 2025-07-12 20:11:37.649057 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_1628f950-5804-44ef-9d42-f709daecc346) 2025-07-12 20:11:37.649103 | orchestrator | 2025-07-12 20:11:37.649121 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:11:37.649140 | orchestrator | Saturday 12 July 2025 20:11:36 +0000 (0:00:00.468) 0:00:51.507 ********* 2025-07-12 20:11:37.649158 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d5652225-c6ef-49dc-a608-4c92c2a71dd6) 2025-07-12 20:11:37.649177 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d5652225-c6ef-49dc-a608-4c92c2a71dd6) 2025-07-12 20:11:37.649195 | orchestrator | 2025-07-12 20:11:37.649213 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-07-12 20:11:37.649231 | orchestrator | Saturday 12 July 2025 20:11:36 +0000 (0:00:00.464) 0:00:51.971 ********* 2025-07-12 20:11:37.649260 | orchestrator | 
ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-07-12 20:11:37.649279 | orchestrator | 2025-07-12 20:11:37.649296 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:37.649328 | orchestrator | Saturday 12 July 2025 20:11:36 +0000 (0:00:00.443) 0:00:52.414 ********* 2025-07-12 20:11:37.649346 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-07-12 20:11:37.649364 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-07-12 20:11:37.649381 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-07-12 20:11:37.649397 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-07-12 20:11:37.649416 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-07-12 20:11:37.649434 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-07-12 20:11:37.649452 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-07-12 20:11:37.649469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-07-12 20:11:37.649487 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-07-12 20:11:37.649502 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-07-12 20:11:37.649519 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-07-12 20:11:37.649535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-07-12 20:11:37.649551 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-07-12 20:11:37.649568 | orchestrator | 2025-07-12 20:11:37.649583 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:37.649599 | orchestrator | Saturday 12 July 2025 20:11:37 +0000 (0:00:00.463) 0:00:52.878 ********* 2025-07-12 20:11:37.649616 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:37.649631 | orchestrator | 2025-07-12 20:11:37.649659 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:46.855804 | orchestrator | Saturday 12 July 2025 20:11:37 +0000 (0:00:00.225) 0:00:53.103 ********* 2025-07-12 20:11:46.855961 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:46.855993 | orchestrator | 2025-07-12 20:11:46.856013 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:46.856032 | orchestrator | Saturday 12 July 2025 20:11:37 +0000 (0:00:00.194) 0:00:53.298 ********* 2025-07-12 20:11:46.856049 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:46.856176 | orchestrator | 2025-07-12 20:11:46.856203 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:46.856221 | orchestrator | Saturday 12 July 2025 20:11:38 +0000 (0:00:00.244) 0:00:53.542 ********* 2025-07-12 20:11:46.856237 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:46.856254 | orchestrator | 2025-07-12 20:11:46.856273 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:46.856290 | orchestrator | Saturday 12 July 2025 20:11:38 +0000 (0:00:00.713) 0:00:54.256 ********* 2025-07-12 20:11:46.856308 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:46.856326 | orchestrator | 2025-07-12 20:11:46.856344 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-07-12 20:11:46.856363 | orchestrator | Saturday 12 July 2025 20:11:39 +0000 (0:00:00.222) 0:00:54.479 ********* 2025-07-12 20:11:46.856382 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:46.856400 | orchestrator | 2025-07-12 20:11:46.856417 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:46.856435 | orchestrator | Saturday 12 July 2025 20:11:39 +0000 (0:00:00.222) 0:00:54.702 ********* 2025-07-12 20:11:46.856454 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:46.856472 | orchestrator | 2025-07-12 20:11:46.856490 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:46.856555 | orchestrator | Saturday 12 July 2025 20:11:39 +0000 (0:00:00.206) 0:00:54.908 ********* 2025-07-12 20:11:46.856576 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:46.856594 | orchestrator | 2025-07-12 20:11:46.856609 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:46.856625 | orchestrator | Saturday 12 July 2025 20:11:39 +0000 (0:00:00.218) 0:00:55.127 ********* 2025-07-12 20:11:46.856642 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-07-12 20:11:46.856659 | orchestrator | 2025-07-12 20:11:46.856676 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:46.856694 | orchestrator | Saturday 12 July 2025 20:11:40 +0000 (0:00:00.347) 0:00:55.474 ********* 2025-07-12 20:11:46.856711 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:46.856728 | orchestrator | 2025-07-12 20:11:46.856746 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:46.856764 | orchestrator | Saturday 12 July 2025 20:11:40 +0000 (0:00:00.220) 0:00:55.695 ********* 2025-07-12 20:11:46.856782 | orchestrator | skipping: 
[testbed-node-5] 2025-07-12 20:11:46.856800 | orchestrator | 2025-07-12 20:11:46.856817 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:46.856836 | orchestrator | Saturday 12 July 2025 20:11:40 +0000 (0:00:00.220) 0:00:55.916 ********* 2025-07-12 20:11:46.856855 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:46.856874 | orchestrator | 2025-07-12 20:11:46.856892 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-07-12 20:11:46.856910 | orchestrator | Saturday 12 July 2025 20:11:40 +0000 (0:00:00.233) 0:00:56.149 ********* 2025-07-12 20:11:46.856929 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:46.856990 | orchestrator | 2025-07-12 20:11:46.857010 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-07-12 20:11:46.857029 | orchestrator | Saturday 12 July 2025 20:11:40 +0000 (0:00:00.221) 0:00:56.371 ********* 2025-07-12 20:11:46.857049 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:46.857099 | orchestrator | 2025-07-12 20:11:46.857121 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-07-12 20:11:46.857140 | orchestrator | Saturday 12 July 2025 20:11:41 +0000 (0:00:00.171) 0:00:56.542 ********* 2025-07-12 20:11:46.857157 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '3d06229f-4e10-52c4-b396-8cb508609dff'}}) 2025-07-12 20:11:46.857176 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '81820e8a-af8a-5909-b466-981a4bed2414'}}) 2025-07-12 20:11:46.857193 | orchestrator | 2025-07-12 20:11:46.857211 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-07-12 20:11:46.857229 | orchestrator | Saturday 12 July 2025 20:11:41 +0000 (0:00:00.211) 0:00:56.754 ********* 2025-07-12 
20:11:46.857250 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'}) 2025-07-12 20:11:46.857269 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'}) 2025-07-12 20:11:46.857287 | orchestrator | 2025-07-12 20:11:46.857304 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-07-12 20:11:46.857323 | orchestrator | Saturday 12 July 2025 20:11:43 +0000 (0:00:02.149) 0:00:58.903 ********* 2025-07-12 20:11:46.857342 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'})  2025-07-12 20:11:46.857362 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'})  2025-07-12 20:11:46.857380 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:46.857399 | orchestrator | 2025-07-12 20:11:46.857442 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-07-12 20:11:46.857500 | orchestrator | Saturday 12 July 2025 20:11:43 +0000 (0:00:00.189) 0:00:59.093 ********* 2025-07-12 20:11:46.857525 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'}) 2025-07-12 20:11:46.857544 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'}) 2025-07-12 20:11:46.857563 | orchestrator | 2025-07-12 20:11:46.857581 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-07-12 
20:11:46.857600 | orchestrator | Saturday 12 July 2025 20:11:44 +0000 (0:00:01.290) 0:01:00.384 ********* 2025-07-12 20:11:46.857619 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'})  2025-07-12 20:11:46.857638 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'})  2025-07-12 20:11:46.857656 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:46.857674 | orchestrator | 2025-07-12 20:11:46.857693 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-07-12 20:11:46.857712 | orchestrator | Saturday 12 July 2025 20:11:45 +0000 (0:00:00.191) 0:01:00.575 ********* 2025-07-12 20:11:46.857731 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:46.857750 | orchestrator | 2025-07-12 20:11:46.857797 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-07-12 20:11:46.857819 | orchestrator | Saturday 12 July 2025 20:11:45 +0000 (0:00:00.150) 0:01:00.725 ********* 2025-07-12 20:11:46.857837 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'})  2025-07-12 20:11:46.857857 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'})  2025-07-12 20:11:46.857876 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:46.857894 | orchestrator | 2025-07-12 20:11:46.857913 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-07-12 20:11:46.857933 | orchestrator | Saturday 12 July 2025 20:11:45 +0000 (0:00:00.181) 0:01:00.907 ********* 2025-07-12 20:11:46.857953 | 
orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:46.857972 | orchestrator | 2025-07-12 20:11:46.857991 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-07-12 20:11:46.858010 | orchestrator | Saturday 12 July 2025 20:11:45 +0000 (0:00:00.192) 0:01:01.099 ********* 2025-07-12 20:11:46.858145 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'})  2025-07-12 20:11:46.858167 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'})  2025-07-12 20:11:46.858185 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:46.858203 | orchestrator | 2025-07-12 20:11:46.858232 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-07-12 20:11:46.858252 | orchestrator | Saturday 12 July 2025 20:11:45 +0000 (0:00:00.168) 0:01:01.268 ********* 2025-07-12 20:11:46.858269 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:46.858287 | orchestrator | 2025-07-12 20:11:46.858304 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-07-12 20:11:46.858322 | orchestrator | Saturday 12 July 2025 20:11:45 +0000 (0:00:00.142) 0:01:01.410 ********* 2025-07-12 20:11:46.858341 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'})  2025-07-12 20:11:46.858377 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'})  2025-07-12 20:11:46.858396 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:46.858414 | orchestrator | 2025-07-12 20:11:46.858432 | orchestrator | TASK 
[Prepare variables for OSD count check] *********************************** 2025-07-12 20:11:46.858451 | orchestrator | Saturday 12 July 2025 20:11:46 +0000 (0:00:00.177) 0:01:01.588 ********* 2025-07-12 20:11:46.858470 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:11:46.858489 | orchestrator | 2025-07-12 20:11:46.858507 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-07-12 20:11:46.858526 | orchestrator | Saturday 12 July 2025 20:11:46 +0000 (0:00:00.169) 0:01:01.757 ********* 2025-07-12 20:11:46.858544 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'})  2025-07-12 20:11:46.858562 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'})  2025-07-12 20:11:46.858580 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:46.858599 | orchestrator | 2025-07-12 20:11:46.858617 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-07-12 20:11:46.858631 | orchestrator | Saturday 12 July 2025 20:11:46 +0000 (0:00:00.164) 0:01:01.922 ********* 2025-07-12 20:11:46.858663 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'})  2025-07-12 20:11:53.193619 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'})  2025-07-12 20:11:53.193818 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:53.193839 | orchestrator | 2025-07-12 20:11:53.193868 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-07-12 20:11:53.193893 | orchestrator | Saturday 12 July 2025 
20:11:46 +0000 (0:00:00.394) 0:01:02.316 ********* 2025-07-12 20:11:53.193905 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'})  2025-07-12 20:11:53.193916 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'})  2025-07-12 20:11:53.193927 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:53.193938 | orchestrator | 2025-07-12 20:11:53.193949 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-07-12 20:11:53.193960 | orchestrator | Saturday 12 July 2025 20:11:47 +0000 (0:00:00.167) 0:01:02.484 ********* 2025-07-12 20:11:53.193971 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:53.193981 | orchestrator | 2025-07-12 20:11:53.193992 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-07-12 20:11:53.194003 | orchestrator | Saturday 12 July 2025 20:11:47 +0000 (0:00:00.152) 0:01:02.637 ********* 2025-07-12 20:11:53.194013 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:53.194133 | orchestrator | 2025-07-12 20:11:53.194144 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-07-12 20:11:53.194155 | orchestrator | Saturday 12 July 2025 20:11:47 +0000 (0:00:00.151) 0:01:02.788 ********* 2025-07-12 20:11:53.194166 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:53.194176 | orchestrator | 2025-07-12 20:11:53.194187 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-07-12 20:11:53.194198 | orchestrator | Saturday 12 July 2025 20:11:47 +0000 (0:00:00.151) 0:01:02.939 ********* 2025-07-12 20:11:53.194209 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 20:11:53.194220 | orchestrator |  
"_num_osds_wanted_per_db_vg": {} 2025-07-12 20:11:53.194231 | orchestrator | } 2025-07-12 20:11:53.194242 | orchestrator | 2025-07-12 20:11:53.194279 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-07-12 20:11:53.194291 | orchestrator | Saturday 12 July 2025 20:11:47 +0000 (0:00:00.152) 0:01:03.092 ********* 2025-07-12 20:11:53.194301 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 20:11:53.194312 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-07-12 20:11:53.194322 | orchestrator | } 2025-07-12 20:11:53.194333 | orchestrator | 2025-07-12 20:11:53.194343 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-07-12 20:11:53.194354 | orchestrator | Saturday 12 July 2025 20:11:47 +0000 (0:00:00.154) 0:01:03.247 ********* 2025-07-12 20:11:53.194364 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 20:11:53.194375 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-07-12 20:11:53.194385 | orchestrator | } 2025-07-12 20:11:53.194396 | orchestrator | 2025-07-12 20:11:53.194406 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-07-12 20:11:53.194417 | orchestrator | Saturday 12 July 2025 20:11:47 +0000 (0:00:00.151) 0:01:03.399 ********* 2025-07-12 20:11:53.194427 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:11:53.194438 | orchestrator | 2025-07-12 20:11:53.194481 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-07-12 20:11:53.194493 | orchestrator | Saturday 12 July 2025 20:11:48 +0000 (0:00:00.481) 0:01:03.880 ********* 2025-07-12 20:11:53.194504 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:11:53.194514 | orchestrator | 2025-07-12 20:11:53.194525 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-07-12 20:11:53.194536 | orchestrator | Saturday 12 July 2025 20:11:48 +0000 
(0:00:00.562) 0:01:04.443 ********* 2025-07-12 20:11:53.194546 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:11:53.194556 | orchestrator | 2025-07-12 20:11:53.194567 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-07-12 20:11:53.194578 | orchestrator | Saturday 12 July 2025 20:11:49 +0000 (0:00:00.520) 0:01:04.963 ********* 2025-07-12 20:11:53.194589 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:11:53.194599 | orchestrator | 2025-07-12 20:11:53.194610 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-07-12 20:11:53.194620 | orchestrator | Saturday 12 July 2025 20:11:49 +0000 (0:00:00.175) 0:01:05.139 ********* 2025-07-12 20:11:53.194631 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:53.194641 | orchestrator | 2025-07-12 20:11:53.194652 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-07-12 20:11:53.194662 | orchestrator | Saturday 12 July 2025 20:11:49 +0000 (0:00:00.313) 0:01:05.453 ********* 2025-07-12 20:11:53.194673 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:11:53.194683 | orchestrator | 2025-07-12 20:11:53.194694 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-07-12 20:11:53.194705 | orchestrator | Saturday 12 July 2025 20:11:50 +0000 (0:00:00.111) 0:01:05.564 ********* 2025-07-12 20:11:53.194715 | orchestrator | ok: [testbed-node-5] => { 2025-07-12 20:11:53.194726 | orchestrator |  "vgs_report": { 2025-07-12 20:11:53.194737 | orchestrator |  "vg": [] 2025-07-12 20:11:53.194748 | orchestrator |  } 2025-07-12 20:11:53.194759 | orchestrator | } 2025-07-12 20:11:53.194769 | orchestrator | 2025-07-12 20:11:53.194780 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-07-12 20:11:53.194791 | orchestrator | Saturday 12 July 2025 20:11:50 +0000 (0:00:00.158) 
0:01:05.723 *********
2025-07-12 20:11:53.194802 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:53.194812 | orchestrator |
2025-07-12 20:11:53.194823 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-07-12 20:11:53.194833 | orchestrator | Saturday 12 July 2025 20:11:50 +0000 (0:00:00.144) 0:01:05.867 *********
2025-07-12 20:11:53.194844 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:53.194854 | orchestrator |
2025-07-12 20:11:53.194866 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-07-12 20:11:53.194898 | orchestrator | Saturday 12 July 2025 20:11:50 +0000 (0:00:00.141) 0:01:06.009 *********
2025-07-12 20:11:53.194918 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:53.194929 | orchestrator |
2025-07-12 20:11:53.194940 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-07-12 20:11:53.194950 | orchestrator | Saturday 12 July 2025 20:11:50 +0000 (0:00:00.144) 0:01:06.153 *********
2025-07-12 20:11:53.194961 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:53.194971 | orchestrator |
2025-07-12 20:11:53.194982 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-07-12 20:11:53.194993 | orchestrator | Saturday 12 July 2025 20:11:50 +0000 (0:00:00.151) 0:01:06.305 *********
2025-07-12 20:11:53.195003 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:53.195013 | orchestrator |
2025-07-12 20:11:53.195024 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-07-12 20:11:53.195035 | orchestrator | Saturday 12 July 2025 20:11:50 +0000 (0:00:00.130) 0:01:06.436 *********
2025-07-12 20:11:53.195045 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:53.195056 | orchestrator |
2025-07-12 20:11:53.195066 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-07-12 20:11:53.195120 | orchestrator | Saturday 12 July 2025 20:11:51 +0000 (0:00:00.133) 0:01:06.569 *********
2025-07-12 20:11:53.195131 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:53.195142 | orchestrator |
2025-07-12 20:11:53.195152 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-07-12 20:11:53.195163 | orchestrator | Saturday 12 July 2025 20:11:51 +0000 (0:00:00.149) 0:01:06.719 *********
2025-07-12 20:11:53.195174 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:53.195184 | orchestrator |
2025-07-12 20:11:53.195195 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-07-12 20:11:53.195206 | orchestrator | Saturday 12 July 2025 20:11:51 +0000 (0:00:00.146) 0:01:06.865 *********
2025-07-12 20:11:53.195216 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:53.195227 | orchestrator |
2025-07-12 20:11:53.195295 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-07-12 20:11:53.195308 | orchestrator | Saturday 12 July 2025 20:11:51 +0000 (0:00:00.143) 0:01:07.008 *********
2025-07-12 20:11:53.195318 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:53.195329 | orchestrator |
2025-07-12 20:11:53.195339 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-07-12 20:11:53.195350 | orchestrator | Saturday 12 July 2025 20:11:51 +0000 (0:00:00.359) 0:01:07.368 *********
2025-07-12 20:11:53.195361 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:53.195371 | orchestrator |
2025-07-12 20:11:53.195382 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-07-12 20:11:53.195393 | orchestrator | Saturday 12 July 2025 20:11:52 +0000 (0:00:00.144) 0:01:07.513 *********
2025-07-12 20:11:53.195403 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:53.195414 | orchestrator |
2025-07-12 20:11:53.195424 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-07-12 20:11:53.195435 | orchestrator | Saturday 12 July 2025 20:11:52 +0000 (0:00:00.140) 0:01:07.654 *********
2025-07-12 20:11:53.195445 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:53.195456 | orchestrator |
2025-07-12 20:11:53.195467 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-07-12 20:11:53.195477 | orchestrator | Saturday 12 July 2025 20:11:52 +0000 (0:00:00.167) 0:01:07.821 *********
2025-07-12 20:11:53.195488 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:53.195498 | orchestrator |
2025-07-12 20:11:53.195516 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-07-12 20:11:53.195528 | orchestrator | Saturday 12 July 2025 20:11:52 +0000 (0:00:00.141) 0:01:07.962 *********
2025-07-12 20:11:53.195539 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'})
2025-07-12 20:11:53.195550 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'})
2025-07-12 20:11:53.195569 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:53.195579 | orchestrator |
2025-07-12 20:11:53.195590 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-07-12 20:11:53.195601 | orchestrator | Saturday 12 July 2025 20:11:52 +0000 (0:00:00.189) 0:01:08.152 *********
2025-07-12 20:11:53.195611 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'})
2025-07-12 20:11:53.195622 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'})
2025-07-12 20:11:53.195633 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:53.195643 | orchestrator |
2025-07-12 20:11:53.195654 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-07-12 20:11:53.195665 | orchestrator | Saturday 12 July 2025 20:11:52 +0000 (0:00:00.164) 0:01:08.317 *********
2025-07-12 20:11:53.195675 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'})
2025-07-12 20:11:53.195711 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'})
2025-07-12 20:11:53.195723 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:53.195733 | orchestrator |
2025-07-12 20:11:53.195744 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-07-12 20:11:53.195755 | orchestrator | Saturday 12 July 2025 20:11:53 +0000 (0:00:00.174) 0:01:08.492 *********
2025-07-12 20:11:53.195775 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'})
2025-07-12 20:11:56.414534 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'})
2025-07-12 20:11:56.414629 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:56.414644 | orchestrator |
2025-07-12 20:11:56.414656 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-07-12 20:11:56.414668 | orchestrator | Saturday 12 July 2025 20:11:53 +0000 (0:00:00.162) 0:01:08.654 *********
2025-07-12 20:11:56.414679 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'})
2025-07-12 20:11:56.414690 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'})
2025-07-12 20:11:56.414700 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:56.414711 | orchestrator |
2025-07-12 20:11:56.414722 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-07-12 20:11:56.414733 | orchestrator | Saturday 12 July 2025 20:11:53 +0000 (0:00:00.163) 0:01:08.818 *********
2025-07-12 20:11:56.414743 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'})
2025-07-12 20:11:56.414754 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'})
2025-07-12 20:11:56.414765 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:56.414775 | orchestrator |
2025-07-12 20:11:56.414786 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-07-12 20:11:56.414797 | orchestrator | Saturday 12 July 2025 20:11:53 +0000 (0:00:00.172) 0:01:08.991 *********
2025-07-12 20:11:56.414807 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'})
2025-07-12 20:11:56.414841 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'})
2025-07-12 20:11:56.414853 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:56.414864 | orchestrator |
2025-07-12 20:11:56.414874 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-07-12 20:11:56.414885 | orchestrator | Saturday 12 July 2025 20:11:53 +0000 (0:00:00.176) 0:01:09.167 *********
2025-07-12 20:11:56.414896 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'})
2025-07-12 20:11:56.414907 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'})
2025-07-12 20:11:56.414918 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:56.414928 | orchestrator |
2025-07-12 20:11:56.414939 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-07-12 20:11:56.414949 | orchestrator | Saturday 12 July 2025 20:11:54 +0000 (0:00:00.405) 0:01:09.573 *********
2025-07-12 20:11:56.414960 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:11:56.414971 | orchestrator |
2025-07-12 20:11:56.414982 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-07-12 20:11:56.414992 | orchestrator | Saturday 12 July 2025 20:11:54 +0000 (0:00:00.570) 0:01:10.144 *********
2025-07-12 20:11:56.415003 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:11:56.415013 | orchestrator |
2025-07-12 20:11:56.415024 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-07-12 20:11:56.415055 | orchestrator | Saturday 12 July 2025 20:11:55 +0000 (0:00:00.519) 0:01:10.663 *********
2025-07-12 20:11:56.415067 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:11:56.415126 | orchestrator |
2025-07-12 20:11:56.415138 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-07-12 20:11:56.415151 | orchestrator | Saturday 12 July 2025 20:11:55 +0000 (0:00:00.150) 0:01:10.813 *********
2025-07-12 20:11:56.415163 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'vg_name': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'})
2025-07-12 20:11:56.415177 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'vg_name': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'})
2025-07-12 20:11:56.415189 | orchestrator |
2025-07-12 20:11:56.415201 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-07-12 20:11:56.415213 | orchestrator | Saturday 12 July 2025 20:11:55 +0000 (0:00:00.200) 0:01:11.013 *********
2025-07-12 20:11:56.415225 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'})
2025-07-12 20:11:56.415237 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'})
2025-07-12 20:11:56.415249 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:56.415261 | orchestrator |
2025-07-12 20:11:56.415273 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-07-12 20:11:56.415286 | orchestrator | Saturday 12 July 2025 20:11:55 +0000 (0:00:00.157) 0:01:11.171 *********
2025-07-12 20:11:56.415314 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'})
2025-07-12 20:11:56.415328 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'})
2025-07-12 20:11:56.415340 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:56.415352 | orchestrator |
2025-07-12 20:11:56.415363 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-07-12 20:11:56.415383 | orchestrator | Saturday 12 July 2025 20:11:56 +0000 (0:00:00.373) 0:01:11.545 *********
2025-07-12 20:11:56.415395 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'})
2025-07-12 20:11:56.415408 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'})
2025-07-12 20:11:56.415420 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:11:56.415433 | orchestrator |
2025-07-12 20:11:56.415443 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-07-12 20:11:56.415454 | orchestrator | Saturday 12 July 2025 20:11:56 +0000 (0:00:00.167) 0:01:11.713 *********
2025-07-12 20:11:56.415464 | orchestrator | ok: [testbed-node-5] => {
2025-07-12 20:11:56.415475 | orchestrator |     "lvm_report": {
2025-07-12 20:11:56.415485 | orchestrator |         "lv": [
2025-07-12 20:11:56.415496 | orchestrator |             {
2025-07-12 20:11:56.415506 | orchestrator |                 "lv_name": "osd-block-3d06229f-4e10-52c4-b396-8cb508609dff",
2025-07-12 20:11:56.415518 | orchestrator |                 "vg_name": "ceph-3d06229f-4e10-52c4-b396-8cb508609dff"
2025-07-12 20:11:56.415528 | orchestrator |             },
2025-07-12 20:11:56.415539 | orchestrator |             {
2025-07-12 20:11:56.415549 | orchestrator |                 "lv_name": "osd-block-81820e8a-af8a-5909-b466-981a4bed2414",
2025-07-12 20:11:56.415575 | orchestrator |                 "vg_name": "ceph-81820e8a-af8a-5909-b466-981a4bed2414"
2025-07-12 20:11:56.415587 | orchestrator |             }
2025-07-12 20:11:56.415598 | orchestrator |         ],
2025-07-12 20:11:56.415608 | orchestrator |         "pv": [
2025-07-12 20:11:56.415619 | orchestrator |             {
2025-07-12 20:11:56.415630 | orchestrator |                 "pv_name": "/dev/sdb",
2025-07-12 20:11:56.415640 | orchestrator |                 "vg_name": "ceph-3d06229f-4e10-52c4-b396-8cb508609dff"
2025-07-12 20:11:56.415651 | orchestrator |             },
2025-07-12 20:11:56.415679 | orchestrator |             {
2025-07-12 20:11:56.415700 | orchestrator |                 "pv_name": "/dev/sdc",
2025-07-12 20:11:56.415711 | orchestrator |                 "vg_name": "ceph-81820e8a-af8a-5909-b466-981a4bed2414"
2025-07-12 20:11:56.415722 | orchestrator |             }
2025-07-12 20:11:56.415732 | orchestrator |         ]
2025-07-12 20:11:56.415743 | orchestrator |     }
2025-07-12 20:11:56.415754 | orchestrator | }
2025-07-12 20:11:56.415765 | orchestrator |
2025-07-12 20:11:56.415775 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:11:56.415791 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-07-12 20:11:56.415802 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-07-12 20:11:56.415813 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0
2025-07-12 20:11:56.415824 | orchestrator |
2025-07-12 20:11:56.415834 | orchestrator |
2025-07-12 20:11:56.415845 | orchestrator |
2025-07-12 20:11:56.415855 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:11:56.415866 | orchestrator | Saturday 12 July 2025 20:11:56 +0000 (0:00:00.135) 0:01:11.849 *********
2025-07-12 20:11:56.415877 | orchestrator | ===============================================================================
2025-07-12 20:11:56.415887 | orchestrator | Create block VGs -------------------------------------------------------- 5.84s
2025-07-12 20:11:56.415898 | orchestrator | Create block LVs -------------------------------------------------------- 4.08s
2025-07-12 20:11:56.415908 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.70s
2025-07-12 20:11:56.415919 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.60s
2025-07-12 20:11:56.415936 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.57s
2025-07-12 20:11:56.415947 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.55s
2025-07-12 20:11:56.415957 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.53s
2025-07-12 20:11:56.415968 | orchestrator | Add known partitions to the list of available block devices ------------- 1.33s
2025-07-12 20:11:56.415978 | orchestrator | Add known links to the list of available block devices ------------------ 1.24s
2025-07-12 20:11:56.415989 | orchestrator | Print LVM report data --------------------------------------------------- 0.93s
2025-07-12 20:11:56.415999 | orchestrator | Fail if DB LV defined in lvm_volumes is missing ------------------------- 0.82s
2025-07-12 20:11:56.416010 | orchestrator | Create WAL LVs for ceph_db_wal_devices ---------------------------------- 0.74s
2025-07-12 20:11:56.416020 | orchestrator | Print 'Create DB LVs for ceph_db_wal_devices' --------------------------- 0.73s
2025-07-12 20:11:56.416031 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.72s
2025-07-12 20:11:56.416041 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s
2025-07-12 20:11:56.416059 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s
2025-07-12 20:11:56.675225 | orchestrator | Add known partitions to the list of available block devices ------------- 0.68s
2025-07-12 20:11:56.675322 | orchestrator | Get initial list of available block devices ----------------------------- 0.66s
2025-07-12 20:11:56.675337 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s
2025-07-12 20:11:56.675349 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.66s
2025-07-12 20:12:08.711754 | orchestrator | 2025-07-12 20:12:08 | INFO  | Task 6ccb62d6-a477-4b68-8135-8e4df362ccd8 (facts) was prepared for execution.
2025-07-12 20:12:08.711850 | orchestrator | 2025-07-12 20:12:08 | INFO  | It takes a moment until task 6ccb62d6-a477-4b68-8135-8e4df362ccd8 (facts) has been started and output is visible here.
2025-07-12 20:12:20.558130 | orchestrator |
2025-07-12 20:12:20.558252 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-07-12 20:12:20.558269 | orchestrator |
2025-07-12 20:12:20.558282 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-07-12 20:12:20.558294 | orchestrator | Saturday 12 July 2025 20:12:12 +0000 (0:00:00.255) 0:00:00.255 *********
2025-07-12 20:12:20.558305 | orchestrator | ok: [testbed-manager]
2025-07-12 20:12:20.558317 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:12:20.558328 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:12:20.558338 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:12:20.558349 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:12:20.558360 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:12:20.558370 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:12:20.558381 | orchestrator |
2025-07-12 20:12:20.558392 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-07-12 20:12:20.558403 | orchestrator | Saturday 12 July 2025 20:12:13 +0000 (0:00:01.060) 0:00:01.315 *********
2025-07-12 20:12:20.558413 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:12:20.558425 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:12:20.558436 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:12:20.558446 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:12:20.558457 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:12:20.558468 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:12:20.558478 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:12:20.558489 | orchestrator |
2025-07-12 20:12:20.558500 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-07-12 20:12:20.558510 | orchestrator |
2025-07-12 20:12:20.558521 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-07-12 20:12:20.558532 | orchestrator | Saturday 12 July 2025 20:12:14 +0000 (0:00:01.242) 0:00:02.557 *********
2025-07-12 20:12:20.558543 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:12:20.558582 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:12:20.558595 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:12:20.558607 | orchestrator | ok: [testbed-manager]
2025-07-12 20:12:20.558619 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:12:20.558631 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:12:20.558642 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:12:20.558654 | orchestrator |
2025-07-12 20:12:20.558665 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-07-12 20:12:20.558678 | orchestrator |
2025-07-12 20:12:20.558690 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-07-12 20:12:20.558717 | orchestrator | Saturday 12 July 2025 20:12:19 +0000 (0:00:04.441) 0:00:06.998 *********
2025-07-12 20:12:20.558730 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:12:20.558741 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:12:20.558751 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:12:20.558762 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:12:20.558772 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:12:20.558783 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:12:20.558793 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:12:20.558804 | orchestrator |
2025-07-12 20:12:20.558814 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:12:20.558825 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:12:20.558837 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:12:20.558848 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:12:20.558858 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:12:20.558869 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:12:20.558880 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:12:20.558890 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:12:20.558901 | orchestrator |
2025-07-12 20:12:20.558911 | orchestrator |
2025-07-12 20:12:20.558922 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:12:20.558933 | orchestrator | Saturday 12 July 2025 20:12:20 +0000 (0:00:00.755) 0:00:07.754 *********
2025-07-12 20:12:20.558943 | orchestrator | ===============================================================================
2025-07-12 20:12:20.558954 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.44s
2025-07-12 20:12:20.558964 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.24s
2025-07-12 20:12:20.558975 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.06s
2025-07-12 20:12:20.558986 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.76s
2025-07-12 20:12:33.152694 | orchestrator | 2025-07-12 20:12:33 | INFO  | Task 1d20116f-445e-4c8a-86d9-ccfa4643357d (frr) was prepared for execution.
2025-07-12 20:12:33.152812 | orchestrator | 2025-07-12 20:12:33 | INFO  | It takes a moment until task 1d20116f-445e-4c8a-86d9-ccfa4643357d (frr) has been started and output is visible here.
2025-07-12 20:12:58.744591 | orchestrator |
2025-07-12 20:12:58.744689 | orchestrator | PLAY [Apply role frr] **********************************************************
2025-07-12 20:12:58.744703 | orchestrator |
2025-07-12 20:12:58.744712 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ********
2025-07-12 20:12:58.744744 | orchestrator | Saturday 12 July 2025 20:12:37 +0000 (0:00:00.263) 0:00:00.263 *********
2025-07-12 20:12:58.744753 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager
2025-07-12 20:12:58.744764 | orchestrator |
2025-07-12 20:12:58.744769 | orchestrator | TASK [osism.services.frr : Pin frr package version] ****************************
2025-07-12 20:12:58.744774 | orchestrator | Saturday 12 July 2025 20:12:37 +0000 (0:00:00.246) 0:00:00.510 *********
2025-07-12 20:12:58.744779 | orchestrator | changed: [testbed-manager]
2025-07-12 20:12:58.744784 | orchestrator |
2025-07-12 20:12:58.744789 | orchestrator | TASK [osism.services.frr : Install frr package] ********************************
2025-07-12 20:12:58.744794 | orchestrator | Saturday 12 July 2025 20:12:39 +0000 (0:00:01.213) 0:00:01.723 *********
2025-07-12 20:12:58.744799 | orchestrator | changed: [testbed-manager]
2025-07-12 20:12:58.744804 | orchestrator |
2025-07-12 20:12:58.744808 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
2025-07-12 20:12:58.744813 | orchestrator | Saturday 12 July 2025 20:12:49 +0000 (0:00:09.853) 0:00:11.577 *********
2025-07-12 20:12:58.744818 | orchestrator | ok: [testbed-manager]
2025-07-12 20:12:58.744824 | orchestrator |
2025-07-12 20:12:58.744828 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
2025-07-12 20:12:58.744833 | orchestrator | Saturday 12 July 2025 20:12:49 +0000 (0:00:00.989) 0:00:12.566 *********
2025-07-12 20:12:58.744838 | orchestrator | changed: [testbed-manager]
2025-07-12 20:12:58.744843 | orchestrator |
2025-07-12 20:12:58.744848 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
2025-07-12 20:12:58.744853 | orchestrator | Saturday 12 July 2025 20:12:50 +0000 (0:00:00.844) 0:00:13.410 *********
2025-07-12 20:12:58.744858 | orchestrator | ok: [testbed-manager]
2025-07-12 20:12:58.744862 | orchestrator |
2025-07-12 20:12:58.744867 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
2025-07-12 20:12:58.744873 | orchestrator | Saturday 12 July 2025 20:12:51 +0000 (0:00:01.109) 0:00:14.520 *********
2025-07-12 20:12:58.744878 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 20:12:58.744882 | orchestrator |
2025-07-12 20:12:58.744887 | orchestrator | TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
2025-07-12 20:12:58.744902 | orchestrator | Saturday 12 July 2025 20:12:52 +0000 (0:00:00.808) 0:00:15.328 *********
2025-07-12 20:12:58.744907 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:12:58.744912 | orchestrator |
2025-07-12 20:12:58.744916 | orchestrator | TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
2025-07-12 20:12:58.744921 | orchestrator | Saturday 12 July 2025 20:12:52 +0000 (0:00:00.135) 0:00:15.464 *********
2025-07-12 20:12:58.744926 | orchestrator | changed: [testbed-manager]
2025-07-12 20:12:58.744930 | orchestrator |
2025-07-12 20:12:58.744935 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ******************************
2025-07-12 20:12:58.744940 | orchestrator | Saturday 12 July 2025 20:12:53 +0000 (0:00:00.899) 0:00:16.364 *********
2025-07-12 20:12:58.744945 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
2025-07-12 20:12:58.744949 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
2025-07-12 20:12:58.744956 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
2025-07-12 20:12:58.744960 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
2025-07-12 20:12:58.744969 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
2025-07-12 20:12:58.744976 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})
2025-07-12 20:12:58.744984 | orchestrator |
2025-07-12 20:12:58.744992 | orchestrator | TASK [osism.services.frr : Manage frr service] *********************************
2025-07-12 20:12:58.745000 | orchestrator | Saturday 12 July 2025 20:12:55 +0000 (0:00:01.997) 0:00:18.361 *********
2025-07-12 20:12:58.745015 | orchestrator | ok: [testbed-manager]
2025-07-12 20:12:58.745023 | orchestrator |
2025-07-12 20:12:58.745031 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
2025-07-12 20:12:58.745039 | orchestrator | Saturday 12 July 2025 20:12:56 +0000 (0:00:01.164) 0:00:19.525 *********
2025-07-12 20:12:58.745048 | orchestrator | changed: [testbed-manager]
2025-07-12 20:12:58.745056 | orchestrator |
2025-07-12 20:12:58.745064 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:12:58.745091 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-07-12 20:12:58.745100 | orchestrator |
2025-07-12 20:12:58.745108 | orchestrator |
2025-07-12 20:12:58.745116 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:12:58.745125 | orchestrator | Saturday 12 July 2025 20:12:58 +0000 (0:00:01.363) 0:00:20.889 *********
2025-07-12 20:12:58.745132 | orchestrator | ===============================================================================
2025-07-12 20:12:58.745140 | orchestrator | osism.services.frr : Install frr package -------------------------------- 9.85s
2025-07-12 20:12:58.745148 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.00s
2025-07-12 20:12:58.745156 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.36s
2025-07-12 20:12:58.745165 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.21s
2025-07-12 20:12:58.745189 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.16s
2025-07-12 20:12:58.745198 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.11s
2025-07-12 20:12:58.745206 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.99s
2025-07-12 20:12:58.745214 | orchestrator | osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.90s
2025-07-12 20:12:58.745223 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.84s
2025-07-12 20:12:58.745232 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.81s
2025-07-12 20:12:58.745240 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.25s
2025-07-12 20:12:58.745249 | orchestrator | osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.14s
2025-07-12 20:12:59.051174 | orchestrator |
2025-07-12 20:12:59.053392 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Sat Jul 12 20:12:59 UTC 2025
2025-07-12 20:12:59.053431 | orchestrator |
2025-07-12 20:13:01.122967 | orchestrator | 2025-07-12 20:13:01 | INFO  | Collection nutshell is prepared for execution
2025-07-12 20:13:01.123164 | orchestrator | 2025-07-12 20:13:01 | INFO  | D [0] - dotfiles
2025-07-12 20:13:11.128376 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [0] - homer
2025-07-12 20:13:11.128549 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [0] - netdata
2025-07-12 20:13:11.128575 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [0] - openstackclient
2025-07-12 20:13:11.128593 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [0] - phpmyadmin
2025-07-12 20:13:11.128611 | orchestrator | 2025-07-12 20:13:11 | INFO  | A [0] - common
2025-07-12 20:13:11.131947 | orchestrator | 2025-07-12 20:13:11 | INFO  | A [1] -- loadbalancer
2025-07-12 20:13:11.132161 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [2] --- opensearch
2025-07-12 20:13:11.132181 | orchestrator | 2025-07-12 20:13:11 | INFO  | A [2] --- mariadb-ng
2025-07-12 20:13:11.132191 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [3] ---- horizon
2025-07-12 20:13:11.132200 | orchestrator | 2025-07-12 20:13:11 | INFO  | A [3] ---- keystone
2025-07-12 20:13:11.132209 | orchestrator | 2025-07-12 20:13:11 | INFO  | A [4] ----- neutron
2025-07-12 20:13:11.132235 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [5] ------ wait-for-nova
2025-07-12 20:13:11.132280 | orchestrator | 2025-07-12 20:13:11 | INFO  | A [5] ------ octavia
2025-07-12 20:13:11.133406 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [4] ----- barbican
2025-07-12 20:13:11.133507 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [4] ----- designate
2025-07-12 20:13:11.133519 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [4] ----- ironic
2025-07-12 20:13:11.133533 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [4] ----- placement
2025-07-12 20:13:11.133542 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [4] ----- magnum
2025-07-12 20:13:11.133906 | orchestrator | 2025-07-12 20:13:11 | INFO  | A [1] -- openvswitch
2025-07-12 20:13:11.134113 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [2] --- ovn
2025-07-12 20:13:11.134226 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [1] -- memcached
2025-07-12 20:13:11.134240 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [1] -- redis
2025-07-12 20:13:11.134395 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [1] -- rabbitmq-ng
2025-07-12 20:13:11.134765 | orchestrator | 2025-07-12 20:13:11 | INFO  | A [0] - kubernetes
2025-07-12 20:13:11.136969 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [1] -- kubeconfig
2025-07-12 20:13:11.137056 | orchestrator | 2025-07-12 20:13:11 | INFO  | A [1] -- copy-kubeconfig
2025-07-12 20:13:11.137104 | orchestrator | 2025-07-12 20:13:11 | INFO  | A [0] - ceph
2025-07-12 20:13:11.139494 | orchestrator | 2025-07-12 20:13:11 | INFO  | A [1] -- ceph-pools
2025-07-12 20:13:11.139599 | orchestrator | 2025-07-12 20:13:11 | INFO  | A [2] --- copy-ceph-keys
2025-07-12 20:13:11.139609 | orchestrator | 2025-07-12 20:13:11 | INFO  | A [3] ---- cephclient
2025-07-12 20:13:11.139625 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [4] ----- ceph-bootstrap-dashboard
2025-07-12 20:13:11.139696 | orchestrator | 2025-07-12 20:13:11 | INFO  | A [4] ----- wait-for-keystone
2025-07-12 20:13:11.139707 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [5] ------ kolla-ceph-rgw
2025-07-12 20:13:11.139720 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [5] ------ glance
2025-07-12 20:13:11.139729 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [5] ------ cinder
2025-07-12 20:13:11.139739 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [5] ------ nova
2025-07-12 20:13:11.140195 | orchestrator | 2025-07-12 20:13:11 | INFO  | A [4] ----- prometheus
2025-07-12 20:13:11.140215 | orchestrator | 2025-07-12 20:13:11 | INFO  | D [5] ------ grafana
2025-07-12 20:13:11.333459 | orchestrator | 2025-07-12 20:13:11 | INFO  | All tasks of the collection nutshell are prepared for execution
2025-07-12 20:13:11.333587 | orchestrator | 2025-07-12 20:13:11 | INFO  | Tasks are running in the background
2025-07-12 20:13:14.313660 | orchestrator | 2025-07-12 20:13:14 | INFO  | No task IDs specified, wait for all currently running tasks
2025-07-12 20:13:16.474477 | orchestrator | 2025-07-12 20:13:16 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state STARTED
2025-07-12 20:13:16.474590 | orchestrator | 2025-07-12 20:13:16 | INFO  | Task d83cb37b-a0c1-437d-9eb6-bdaf00f3a195 is in state STARTED
2025-07-12 20:13:16.478454 | orchestrator | 2025-07-12 20:13:16 | INFO  | Task caab18d3-ce7e-4104-98ae-d6dd4a45ae3b is in state STARTED
2025-07-12 20:13:16.479123 | orchestrator | 2025-07-12 20:13:16 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:13:16.479714 | orchestrator | 2025-07-12 20:13:16 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:13:16.482762 | orchestrator | 2025-07-12 20:13:16 | INFO  | Task 26717171-fdf1-4eb8-842e-0a77aa98a383 is in state STARTED
2025-07-12 20:13:16.483503 | orchestrator | 2025-07-12 20:13:16 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED
2025-07-12 20:13:16.483543 | orchestrator | 2025-07-12 20:13:16 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:13:19.552962 | orchestrator | 2025-07-12 20:13:19 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state STARTED
2025-07-12 20:13:19.553047 | orchestrator | 2025-07-12 20:13:19 | INFO  | Task d83cb37b-a0c1-437d-9eb6-bdaf00f3a195 is in state STARTED
2025-07-12 20:13:19.553890 | orchestrator | 2025-07-12 20:13:19 | INFO  | Task caab18d3-ce7e-4104-98ae-d6dd4a45ae3b is in state STARTED
2025-07-12 20:13:19.554939 | orchestrator | 2025-07-12 20:13:19 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:13:19.556007 | orchestrator | 2025-07-12 20:13:19 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:13:19.556886 | orchestrator | 2025-07-12 20:13:19 | INFO  | Task 26717171-fdf1-4eb8-842e-0a77aa98a383 is in state STARTED
2025-07-12 20:13:19.560388 | orchestrator | 2025-07-12 20:13:19 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED
2025-07-12 20:13:19.560431 | orchestrator | 2025-07-12 20:13:19 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:13:22.629543 | orchestrator | 2025-07-12 20:13:22 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state STARTED
2025-07-12 20:13:22.634462 | orchestrator | 2025-07-12 20:13:22 | INFO  | Task d83cb37b-a0c1-437d-9eb6-bdaf00f3a195 is in state STARTED
2025-07-12 20:13:22.643771 | orchestrator | 2025-07-12 20:13:22 | INFO  | Task caab18d3-ce7e-4104-98ae-d6dd4a45ae3b is in state STARTED
2025-07-12 20:13:22.643837 | orchestrator | 2025-07-12 20:13:22 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:13:22.644454 | orchestrator | 2025-07-12 20:13:22 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:13:22.645753 | orchestrator | 2025-07-12 20:13:22 | INFO  | Task 26717171-fdf1-4eb8-842e-0a77aa98a383 is in state STARTED
2025-07-12 20:13:22.645778 | orchestrator | 2025-07-12 20:13:22 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED
2025-07-12 20:13:22.645784 | orchestrator | 2025-07-12 20:13:22 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:13:25.712903 | orchestrator | 2025-07-12 20:13:25 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state STARTED
2025-07-12 20:13:25.712991 | orchestrator | 2025-07-12 20:13:25 | INFO  | Task d83cb37b-a0c1-437d-9eb6-bdaf00f3a195 is in state STARTED
2025-07-12
20:13:25.719678 | orchestrator | 2025-07-12 20:13:25 | INFO  | Task caab18d3-ce7e-4104-98ae-d6dd4a45ae3b is in state STARTED 2025-07-12 20:13:25.719889 | orchestrator | 2025-07-12 20:13:25 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:13:25.722400 | orchestrator | 2025-07-12 20:13:25 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:13:25.722456 | orchestrator | 2025-07-12 20:13:25 | INFO  | Task 26717171-fdf1-4eb8-842e-0a77aa98a383 is in state STARTED 2025-07-12 20:13:25.723039 | orchestrator | 2025-07-12 20:13:25 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:13:25.723057 | orchestrator | 2025-07-12 20:13:25 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:13:28.792485 | orchestrator | 2025-07-12 20:13:28 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state STARTED 2025-07-12 20:13:28.792537 | orchestrator | 2025-07-12 20:13:28 | INFO  | Task d83cb37b-a0c1-437d-9eb6-bdaf00f3a195 is in state STARTED 2025-07-12 20:13:28.801521 | orchestrator | 2025-07-12 20:13:28 | INFO  | Task caab18d3-ce7e-4104-98ae-d6dd4a45ae3b is in state STARTED 2025-07-12 20:13:28.801570 | orchestrator | 2025-07-12 20:13:28 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:13:28.801575 | orchestrator | 2025-07-12 20:13:28 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:13:28.806409 | orchestrator | 2025-07-12 20:13:28 | INFO  | Task 26717171-fdf1-4eb8-842e-0a77aa98a383 is in state STARTED 2025-07-12 20:13:28.806445 | orchestrator | 2025-07-12 20:13:28 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:13:28.806451 | orchestrator | 2025-07-12 20:13:28 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:13:31.869370 | orchestrator | 2025-07-12 20:13:31 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state STARTED 2025-07-12 
20:13:31.876309 | orchestrator | 2025-07-12 20:13:31 | INFO  | Task d83cb37b-a0c1-437d-9eb6-bdaf00f3a195 is in state STARTED 2025-07-12 20:13:31.880995 | orchestrator | 2025-07-12 20:13:31 | INFO  | Task caab18d3-ce7e-4104-98ae-d6dd4a45ae3b is in state STARTED 2025-07-12 20:13:31.887905 | orchestrator | 2025-07-12 20:13:31 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:13:31.887965 | orchestrator | 2025-07-12 20:13:31 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:13:31.895058 | orchestrator | 2025-07-12 20:13:31 | INFO  | Task 26717171-fdf1-4eb8-842e-0a77aa98a383 is in state STARTED 2025-07-12 20:13:31.897157 | orchestrator | 2025-07-12 20:13:31 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:13:31.897217 | orchestrator | 2025-07-12 20:13:31 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:13:34.945811 | orchestrator | 2025-07-12 20:13:34 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state STARTED 2025-07-12 20:13:34.946731 | orchestrator | 2025-07-12 20:13:34 | INFO  | Task d83cb37b-a0c1-437d-9eb6-bdaf00f3a195 is in state STARTED 2025-07-12 20:13:34.947936 | orchestrator | 2025-07-12 20:13:34 | INFO  | Task caab18d3-ce7e-4104-98ae-d6dd4a45ae3b is in state STARTED 2025-07-12 20:13:34.950405 | orchestrator | 2025-07-12 20:13:34 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:13:34.954211 | orchestrator | 2025-07-12 20:13:34 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:13:34.954828 | orchestrator | 2025-07-12 20:13:34 | INFO  | Task 26717171-fdf1-4eb8-842e-0a77aa98a383 is in state STARTED 2025-07-12 20:13:34.958315 | orchestrator | 2025-07-12 20:13:34 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:13:34.958364 | orchestrator | 2025-07-12 20:13:34 | INFO  | Wait 1 second(s) until the next check 2025-07-12 
20:13:38.037903 | orchestrator | 2025-07-12 20:13:38 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state STARTED 2025-07-12 20:13:38.039156 | orchestrator | 2025-07-12 20:13:38 | INFO  | Task d83cb37b-a0c1-437d-9eb6-bdaf00f3a195 is in state STARTED 2025-07-12 20:13:38.043549 | orchestrator | 2025-07-12 20:13:38 | INFO  | Task caab18d3-ce7e-4104-98ae-d6dd4a45ae3b is in state STARTED 2025-07-12 20:13:38.046981 | orchestrator | 2025-07-12 20:13:38 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:13:38.048411 | orchestrator | 2025-07-12 20:13:38 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:13:38.066640 | orchestrator | 2025-07-12 20:13:38 | INFO  | Task 26717171-fdf1-4eb8-842e-0a77aa98a383 is in state STARTED 2025-07-12 20:13:38.066745 | orchestrator | 2025-07-12 20:13:38 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:13:38.066756 | orchestrator | 2025-07-12 20:13:38 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:13:41.156516 | orchestrator | 2025-07-12 20:13:41 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state STARTED 2025-07-12 20:13:41.160021 | orchestrator | 2025-07-12 20:13:41 | INFO  | Task d83cb37b-a0c1-437d-9eb6-bdaf00f3a195 is in state STARTED 2025-07-12 20:13:41.160159 | orchestrator | 2025-07-12 20:13:41 | INFO  | Task caab18d3-ce7e-4104-98ae-d6dd4a45ae3b is in state STARTED 2025-07-12 20:13:41.167705 | orchestrator | 2025-07-12 20:13:41 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:13:41.167783 | orchestrator | 2025-07-12 20:13:41 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:13:41.171558 | orchestrator | 2025-07-12 20:13:41 | INFO  | Task 26717171-fdf1-4eb8-842e-0a77aa98a383 is in state STARTED 2025-07-12 20:13:41.175847 | orchestrator | 2025-07-12 20:13:41 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state 
STARTED 2025-07-12 20:13:41.175886 | orchestrator | 2025-07-12 20:13:41 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:13:44.248418 | orchestrator | 2025-07-12 20:13:44 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state STARTED 2025-07-12 20:13:44.249161 | orchestrator | 2025-07-12 20:13:44 | INFO  | Task d83cb37b-a0c1-437d-9eb6-bdaf00f3a195 is in state STARTED 2025-07-12 20:13:44.253150 | orchestrator | 2025-07-12 20:13:44 | INFO  | Task caab18d3-ce7e-4104-98ae-d6dd4a45ae3b is in state STARTED 2025-07-12 20:13:44.255235 | orchestrator | 2025-07-12 20:13:44 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:13:44.259587 | orchestrator | 2025-07-12 20:13:44 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:13:44.261921 | orchestrator | 2025-07-12 20:13:44 | INFO  | Task 26717171-fdf1-4eb8-842e-0a77aa98a383 is in state STARTED 2025-07-12 20:13:44.267008 | orchestrator | 2025-07-12 20:13:44 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:13:44.267056 | orchestrator | 2025-07-12 20:13:44 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:13:47.351095 | orchestrator | 2025-07-12 20:13:47 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state STARTED 2025-07-12 20:13:47.352827 | orchestrator | 2025-07-12 20:13:47 | INFO  | Task d83cb37b-a0c1-437d-9eb6-bdaf00f3a195 is in state STARTED 2025-07-12 20:13:47.354908 | orchestrator | 2025-07-12 20:13:47 | INFO  | Task caab18d3-ce7e-4104-98ae-d6dd4a45ae3b is in state STARTED 2025-07-12 20:13:47.355495 | orchestrator | 2025-07-12 20:13:47 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:13:47.359002 | orchestrator | 2025-07-12 20:13:47 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:13:47.359043 | orchestrator | 2025-07-12 20:13:47 | INFO  | Task 26717171-fdf1-4eb8-842e-0a77aa98a383 is in state STARTED 
2025-07-12 20:13:47.361906 | orchestrator | 2025-07-12 20:13:47 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:13:47.361963 | orchestrator | 2025-07-12 20:13:47 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:13:50.429565 | orchestrator | 2025-07-12 20:13:50 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state STARTED 2025-07-12 20:13:50.432382 | orchestrator | 2025-07-12 20:13:50 | INFO  | Task d83cb37b-a0c1-437d-9eb6-bdaf00f3a195 is in state STARTED 2025-07-12 20:13:50.436335 | orchestrator | 2025-07-12 20:13:50 | INFO  | Task caab18d3-ce7e-4104-98ae-d6dd4a45ae3b is in state STARTED 2025-07-12 20:13:50.436834 | orchestrator | 2025-07-12 20:13:50 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:13:50.441474 | orchestrator | 2025-07-12 20:13:50 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:13:50.441564 | orchestrator | 2025-07-12 20:13:50 | INFO  | Task 26717171-fdf1-4eb8-842e-0a77aa98a383 is in state STARTED 2025-07-12 20:13:50.442746 | orchestrator | 2025-07-12 20:13:50 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:13:50.442822 | orchestrator | 2025-07-12 20:13:50 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:13:53.533931 | orchestrator | 2025-07-12 20:13:53.534110 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-07-12 20:13:53.534131 | orchestrator | 2025-07-12 20:13:53.534143 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
2025-07-12 20:13:53.534155 | orchestrator | Saturday 12 July 2025 20:13:30 +0000 (0:00:00.962) 0:00:00.962 *********
2025-07-12 20:13:53.534166 | orchestrator | changed: [testbed-manager]
2025-07-12 20:13:53.534178 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:13:53.534189 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:13:53.534200 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:13:53.534210 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:13:53.534221 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:13:53.534232 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:13:53.534242 | orchestrator |
2025-07-12 20:13:53.534253 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2025-07-12 20:13:53.534264 | orchestrator | Saturday 12 July 2025 20:13:35 +0000 (0:00:05.358) 0:00:06.320 *********
2025-07-12 20:13:53.534275 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-07-12 20:13:53.534286 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-07-12 20:13:53.534297 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-07-12 20:13:53.534307 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-07-12 20:13:53.534318 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-07-12 20:13:53.534329 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-07-12 20:13:53.534339 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-07-12 20:13:53.534350 | orchestrator |
2025-07-12 20:13:53.534361 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2025-07-12 20:13:53.534372 | orchestrator | Saturday 12 July 2025 20:13:39 +0000 (0:00:04.354) 0:00:10.675 *********
2025-07-12 20:13:53.534386 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 20:13:36.873707', 'end': '2025-07-12 20:13:36.881777', 'delta': '0:00:00.008070', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-07-12 20:13:53.534414 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 20:13:36.941108', 'end': '2025-07-12 20:13:36.949835', 'delta': '0:00:00.008727', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-07-12 20:13:53.534449 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 20:13:37.247218', 'end': '2025-07-12 20:13:37.254618', 'delta': '0:00:00.007400', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-07-12 20:13:53.534487 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 20:13:37.766399', 'end': '2025-07-12 20:13:37.775584', 'delta': '0:00:00.009185', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-07-12 20:13:53.534500 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 20:13:38.603291', 'end': '2025-07-12 20:13:38.611119', 'delta': '0:00:00.007828', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-07-12 20:13:53.534511 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 20:13:39.088923', 'end': '2025-07-12 20:13:39.097163', 'delta': '0:00:00.008240', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-07-12 20:13:53.534528 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-07-12 20:13:39.447711', 'end': '2025-07-12 20:13:39.458492', 'delta': '0:00:00.010781', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2025-07-12 20:13:53.534548 | orchestrator |
2025-07-12 20:13:53.534559 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2025-07-12 20:13:53.534570 | orchestrator | Saturday 12 July 2025 20:13:43 +0000 (0:00:04.004) 0:00:14.679 *********
2025-07-12 20:13:53.534581 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2025-07-12 20:13:53.534592 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2025-07-12 20:13:53.534603 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2025-07-12 20:13:53.534614 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2025-07-12 20:13:53.534624 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2025-07-12 20:13:53.534635 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2025-07-12 20:13:53.534646 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2025-07-12 20:13:53.534656 | orchestrator |
2025-07-12 20:13:53.534667 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2025-07-12 20:13:53.534677 | orchestrator | Saturday 12 July 2025 20:13:46 +0000 (0:00:02.267) 0:00:16.947 *********
2025-07-12 20:13:53.534688 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2025-07-12 20:13:53.534699 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2025-07-12 20:13:53.534709 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2025-07-12 20:13:53.534720 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2025-07-12 20:13:53.534730 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2025-07-12 20:13:53.534741 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2025-07-12 20:13:53.534751 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2025-07-12 20:13:53.534762 | orchestrator |
2025-07-12 20:13:53.534773 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:13:53.534791 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:13:53.534803 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:13:53.534814 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:13:53.534826 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:13:53.534836 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:13:53.534847 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:13:53.534858 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:13:53.534868 | orchestrator |
2025-07-12 20:13:53.534879 | orchestrator |
2025-07-12 20:13:53.534890 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:13:53.534900 | orchestrator | Saturday 12 July 2025 20:13:49 +0000 (0:00:03.860) 0:00:20.808 *********
2025-07-12 20:13:53.534911 | orchestrator | ===============================================================================
2025-07-12 20:13:53.534929 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 5.36s
2025-07-12 20:13:53.534940 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 4.35s
2025-07-12 20:13:53.534950 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 4.01s
2025-07-12 20:13:53.534961 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.86s
2025-07-12 20:13:53.534972 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.27s
2025-07-12 20:13:53.534982 | orchestrator | 2025-07-12 20:13:53 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state STARTED
2025-07-12 20:13:53.534993 | orchestrator | 2025-07-12 20:13:53 | INFO  | Task d83cb37b-a0c1-437d-9eb6-bdaf00f3a195 is in state STARTED
2025-07-12 20:13:53.535004 | orchestrator | 2025-07-12 20:13:53 | INFO  | Task caab18d3-ce7e-4104-98ae-d6dd4a45ae3b is in state SUCCESS
2025-07-12 20:13:53.535015 | orchestrator | 2025-07-12 20:13:53 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:13:53.535026 | orchestrator | 2025-07-12 20:13:53 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:13:53.535036 | orchestrator | 2025-07-12 20:13:53 | INFO  | Task 26717171-fdf1-4eb8-842e-0a77aa98a383 is in state STARTED
2025-07-12 20:13:53.535047 | orchestrator | 2025-07-12 20:13:53 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state STARTED
2025-07-12 20:13:53.535057 | orchestrator | 2025-07-12 20:13:53 | INFO  | Task
05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED
2025-07-12 20:13:53.535085 | orchestrator | 2025-07-12 20:13:53 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:14:08.831475 | orchestrator | 2025-07-12 20:14:08 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state STARTED
2025-07-12 20:14:08.832135 | orchestrator | 2025-07-12 20:14:08 | INFO  | Task d83cb37b-a0c1-437d-9eb6-bdaf00f3a195 is in state SUCCESS
2025-07-12 20:14:08.837640 | orchestrator | 2025-07-12 20:14:08 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:14:08.839971 | orchestrator | 2025-07-12 20:14:08 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:14:08.849708 | orchestrator | 2025-07-12 20:14:08 | INFO  | Task 26717171-fdf1-4eb8-842e-0a77aa98a383 is in state STARTED
2025-07-12 20:14:08.851612 | orchestrator | 2025-07-12 20:14:08 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state STARTED
2025-07-12 20:14:08.853250 | orchestrator | 2025-07-12 20:14:08 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED
2025-07-12 20:14:08.853273 | orchestrator | 2025-07-12 20:14:08 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:14:24.305139 | orchestrator | 2025-07-12 20:14:24 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state STARTED
2025-07-12 20:14:24.307149 | orchestrator | 2025-07-12 20:14:24 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:14:24.310667 | orchestrator | 2025-07-12 20:14:24 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:14:24.311855 | orchestrator | 2025-07-12 20:14:24 | INFO  | Task 26717171-fdf1-4eb8-842e-0a77aa98a383 is in state SUCCESS
2025-07-12 20:14:24.313803 | orchestrator | 2025-07-12 20:14:24 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state STARTED
2025-07-12 20:14:24.320588 | orchestrator | 2025-07-12 20:14:24 | INFO  | Task
05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:14:24.320652 | orchestrator | 2025-07-12 20:14:24 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:14:27.378250 | orchestrator | 2025-07-12 20:14:27 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state STARTED 2025-07-12 20:14:27.380522 | orchestrator | 2025-07-12 20:14:27 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:14:27.382769 | orchestrator | 2025-07-12 20:14:27 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:14:27.384202 | orchestrator | 2025-07-12 20:14:27 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state STARTED 2025-07-12 20:14:27.386007 | orchestrator | 2025-07-12 20:14:27 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:14:27.386130 | orchestrator | 2025-07-12 20:14:27 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:14:30.488958 | orchestrator | 2025-07-12 20:14:30 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state STARTED 2025-07-12 20:14:30.489014 | orchestrator | 2025-07-12 20:14:30 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:14:30.489030 | orchestrator | 2025-07-12 20:14:30 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:14:30.490148 | orchestrator | 2025-07-12 20:14:30 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state STARTED 2025-07-12 20:14:30.490831 | orchestrator | 2025-07-12 20:14:30 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:14:30.492016 | orchestrator | 2025-07-12 20:14:30 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:14:33.546429 | orchestrator | 2025-07-12 20:14:33 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state STARTED 2025-07-12 20:14:33.548685 | orchestrator | 2025-07-12 20:14:33 | INFO  | Task 
36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:14:33.549902 | orchestrator | 2025-07-12 20:14:33 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:14:33.552879 | orchestrator | 2025-07-12 20:14:33 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state STARTED 2025-07-12 20:14:33.554945 | orchestrator | 2025-07-12 20:14:33 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:14:33.555107 | orchestrator | 2025-07-12 20:14:33 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:14:36.605592 | orchestrator | 2025-07-12 20:14:36 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state STARTED 2025-07-12 20:14:36.608998 | orchestrator | 2025-07-12 20:14:36 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:14:36.620555 | orchestrator | 2025-07-12 20:14:36 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:14:36.620616 | orchestrator | 2025-07-12 20:14:36 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state STARTED 2025-07-12 20:14:36.626577 | orchestrator | 2025-07-12 20:14:36 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:14:36.626649 | orchestrator | 2025-07-12 20:14:36 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:14:39.687402 | orchestrator | 2025-07-12 20:14:39 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state STARTED 2025-07-12 20:14:39.688544 | orchestrator | 2025-07-12 20:14:39 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:14:39.689873 | orchestrator | 2025-07-12 20:14:39 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:14:39.690889 | orchestrator | 2025-07-12 20:14:39 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state STARTED 2025-07-12 20:14:39.692441 | orchestrator | 2025-07-12 20:14:39 | INFO  | Task 
05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:14:39.692465 | orchestrator | 2025-07-12 20:14:39 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:14:42.738854 | orchestrator | 2025-07-12 20:14:42 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state STARTED 2025-07-12 20:14:42.740077 | orchestrator | 2025-07-12 20:14:42 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:14:42.744618 | orchestrator | 2025-07-12 20:14:42 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:14:42.749894 | orchestrator | 2025-07-12 20:14:42 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state STARTED 2025-07-12 20:14:42.753247 | orchestrator | 2025-07-12 20:14:42 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:14:42.753545 | orchestrator | 2025-07-12 20:14:42 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:14:45.797561 | orchestrator | 2025-07-12 20:14:45.797653 | orchestrator | 2025-07-12 20:14:45.797671 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-07-12 20:14:45.797683 | orchestrator | 2025-07-12 20:14:45.797694 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-07-12 20:14:45.797706 | orchestrator | Saturday 12 July 2025 20:13:27 +0000 (0:00:00.762) 0:00:00.762 ********* 2025-07-12 20:14:45.797717 | orchestrator | ok: [testbed-manager] => { 2025-07-12 20:14:45.797730 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 
2025-07-12 20:14:45.797742 | orchestrator | }
2025-07-12 20:14:45.797754 | orchestrator |
2025-07-12 20:14:45.797764 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-07-12 20:14:45.797775 | orchestrator | Saturday 12 July 2025 20:13:28 +0000 (0:00:00.478) 0:00:01.240 *********
2025-07-12 20:14:45.797786 | orchestrator | ok: [testbed-manager]
2025-07-12 20:14:45.797797 | orchestrator |
2025-07-12 20:14:45.797808 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-07-12 20:14:45.797819 | orchestrator | Saturday 12 July 2025 20:13:29 +0000 (0:00:01.741) 0:00:02.982 *********
2025-07-12 20:14:45.797837 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-07-12 20:14:45.797848 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-07-12 20:14:45.797859 | orchestrator |
2025-07-12 20:14:45.797870 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-07-12 20:14:45.797896 | orchestrator | Saturday 12 July 2025 20:13:31 +0000 (0:00:02.145) 0:00:04.691 *********
2025-07-12 20:14:45.797908 | orchestrator | changed: [testbed-manager]
2025-07-12 20:14:45.797919 | orchestrator |
2025-07-12 20:14:45.797930 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-07-12 20:14:45.797941 | orchestrator | Saturday 12 July 2025 20:13:33 +0000 (0:00:02.435) 0:00:06.837 *********
2025-07-12 20:14:45.797951 | orchestrator | changed: [testbed-manager]
2025-07-12 20:14:45.797962 | orchestrator |
2025-07-12 20:14:45.797973 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-07-12 20:14:45.797984 | orchestrator | Saturday 12 July 2025 20:13:36 +0000 (0:00:02.435) 0:00:09.273 *********
2025-07-12 20:14:45.797995 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-07-12 20:14:45.798104 | orchestrator | ok: [testbed-manager]
2025-07-12 20:14:45.798128 | orchestrator |
2025-07-12 20:14:45.798146 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-07-12 20:14:45.798163 | orchestrator | Saturday 12 July 2025 20:14:03 +0000 (0:00:27.216) 0:00:36.489 *********
2025-07-12 20:14:45.798181 | orchestrator | changed: [testbed-manager]
2025-07-12 20:14:45.798199 | orchestrator |
2025-07-12 20:14:45.798217 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:14:45.798240 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:14:45.798260 | orchestrator |
2025-07-12 20:14:45.798280 | orchestrator |
2025-07-12 20:14:45.798298 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:14:45.798316 | orchestrator | Saturday 12 July 2025 20:14:05 +0000 (0:00:01.742) 0:00:38.232 *********
2025-07-12 20:14:45.798334 | orchestrator | ===============================================================================
2025-07-12 20:14:45.798349 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.22s
2025-07-12 20:14:45.798359 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 2.44s
2025-07-12 20:14:45.798370 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.15s
2025-07-12 20:14:45.798381 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 1.74s
2025-07-12 20:14:45.798391 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.74s
2025-07-12 20:14:45.798402 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.71s
2025-07-12 20:14:45.798413 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.48s
2025-07-12 20:14:45.798423 | orchestrator |
2025-07-12 20:14:45.798434 | orchestrator |
2025-07-12 20:14:45.798445 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-07-12 20:14:45.798455 | orchestrator |
2025-07-12 20:14:45.798466 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-07-12 20:14:45.798476 | orchestrator | Saturday 12 July 2025 20:13:29 +0000 (0:00:00.986) 0:00:00.986 *********
2025-07-12 20:14:45.798488 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-07-12 20:14:45.798499 | orchestrator |
2025-07-12 20:14:45.798510 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-07-12 20:14:45.798520 | orchestrator | Saturday 12 July 2025 20:13:30 +0000 (0:00:00.714) 0:00:01.701 *********
2025-07-12 20:14:45.798531 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-07-12 20:14:45.798542 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-07-12 20:14:45.798552 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-07-12 20:14:45.798563 | orchestrator |
2025-07-12 20:14:45.798574 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-07-12 20:14:45.798584 | orchestrator | Saturday 12 July 2025 20:13:32 +0000 (0:00:02.482) 0:00:04.184 *********
2025-07-12 20:14:45.798596 | orchestrator | changed: [testbed-manager]
2025-07-12 20:14:45.798606 | orchestrator |
2025-07-12 20:14:45.798617 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-07-12 20:14:45.798628 | orchestrator | Saturday 12 July 2025 20:13:34 +0000 (0:00:01.899) 0:00:06.084 *********
2025-07-12 20:14:45.798656 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-07-12 20:14:45.798668 | orchestrator | ok: [testbed-manager]
2025-07-12 20:14:45.798678 | orchestrator |
2025-07-12 20:14:45.798689 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-07-12 20:14:45.798700 | orchestrator | Saturday 12 July 2025 20:14:14 +0000 (0:00:40.092) 0:00:46.177 *********
2025-07-12 20:14:45.798721 | orchestrator | changed: [testbed-manager]
2025-07-12 20:14:45.798732 | orchestrator |
2025-07-12 20:14:45.798743 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-07-12 20:14:45.798753 | orchestrator | Saturday 12 July 2025 20:14:16 +0000 (0:00:01.416) 0:00:47.593 *********
2025-07-12 20:14:45.798764 | orchestrator | ok: [testbed-manager]
2025-07-12 20:14:45.798775 | orchestrator |
2025-07-12 20:14:45.798785 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-07-12 20:14:45.798796 | orchestrator | Saturday 12 July 2025 20:14:17 +0000 (0:00:01.342) 0:00:48.935 *********
2025-07-12 20:14:45.798807 | orchestrator | changed: [testbed-manager]
2025-07-12 20:14:45.798817 | orchestrator |
2025-07-12 20:14:45.798828 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-07-12 20:14:45.798838 | orchestrator | Saturday 12 July 2025 20:14:20 +0000 (0:00:02.794) 0:00:51.730 *********
2025-07-12 20:14:45.798855 | orchestrator | changed: [testbed-manager]
2025-07-12 20:14:45.798866 | orchestrator |
2025-07-12 20:14:45.798876 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-07-12 20:14:45.798887 | orchestrator | Saturday 12 July 2025 20:14:21 +0000 (0:00:00.866) 0:00:52.596 *********
2025-07-12 20:14:45.798899 | orchestrator | changed: [testbed-manager]
2025-07-12 20:14:45.798911 | orchestrator |
2025-07-12 20:14:45.798923 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-07-12 20:14:45.798935 | orchestrator | Saturday 12 July 2025 20:14:21 +0000 (0:00:00.717) 0:00:53.314 *********
2025-07-12 20:14:45.798947 | orchestrator | ok: [testbed-manager]
2025-07-12 20:14:45.798959 | orchestrator |
2025-07-12 20:14:45.798972 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:14:45.798984 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:14:45.798997 | orchestrator |
2025-07-12 20:14:45.799009 | orchestrator |
2025-07-12 20:14:45.799021 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:14:45.799033 | orchestrator | Saturday 12 July 2025 20:14:22 +0000 (0:00:00.402) 0:00:53.716 *********
2025-07-12 20:14:45.799045 | orchestrator | ===============================================================================
2025-07-12 20:14:45.799079 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 40.09s
2025-07-12 20:14:45.799092 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.79s
2025-07-12 20:14:45.799105 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.48s
2025-07-12 20:14:45.799117 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.90s
2025-07-12 20:14:45.799129 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.42s
2025-07-12 20:14:45.799141 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.34s
2025-07-12 20:14:45.799152 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.87s
2025-07-12 20:14:45.799163 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.72s
2025-07-12 20:14:45.799186 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.71s
2025-07-12 20:14:45.799208 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.40s
2025-07-12 20:14:45.799219 | orchestrator |
2025-07-12 20:14:45.799229 | orchestrator |
2025-07-12 20:14:45.799240 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:14:45.799251 | orchestrator |
2025-07-12 20:14:45.799291 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:14:45.799302 | orchestrator | Saturday 12 July 2025 20:13:29 +0000 (0:00:00.905) 0:00:00.905 *********
2025-07-12 20:14:45.799313 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-07-12 20:14:45.799323 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-07-12 20:14:45.799342 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-07-12 20:14:45.799353 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-07-12 20:14:45.799363 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-07-12 20:14:45.799374 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-07-12 20:14:45.799384 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-07-12 20:14:45.799395 | orchestrator |
2025-07-12 20:14:45.799406 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-07-12 20:14:45.799416 | orchestrator |
2025-07-12 20:14:45.799427 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-07-12 20:14:45.799438 | orchestrator | Saturday 12 July 2025 20:13:32 +0000 (0:00:02.993) 0:00:03.898 *********
2025-07-12 20:14:45.799461 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:14:45.799475 | orchestrator |
2025-07-12 20:14:45.799486 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-07-12 20:14:45.799497 | orchestrator | Saturday 12 July 2025 20:13:34 +0000 (0:00:02.660) 0:00:06.559 *********
2025-07-12 20:14:45.799508 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:14:45.799518 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:14:45.799529 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:14:45.799540 | orchestrator | ok: [testbed-manager]
2025-07-12 20:14:45.799550 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:14:45.799569 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:14:45.799580 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:14:45.799591 | orchestrator |
2025-07-12 20:14:45.799602 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-07-12 20:14:45.799613 | orchestrator | Saturday 12 July 2025 20:13:39 +0000 (0:00:04.363) 0:00:10.922 *********
2025-07-12 20:14:45.799623 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:14:45.799634 | orchestrator | ok: [testbed-manager]
2025-07-12 20:14:45.799644 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:14:45.799655 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:14:45.799665 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:14:45.799676 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:14:45.799687 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:14:45.799697 | orchestrator |
2025-07-12 20:14:45.799708 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-07-12 20:14:45.799719 | orchestrator | Saturday 12 July 2025 20:13:44 +0000 (0:00:05.274) 0:00:16.197 *********
2025-07-12 20:14:45.799730 | orchestrator | changed: [testbed-manager]
2025-07-12 20:14:45.799741 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:14:45.799751 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:14:45.799762 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:14:45.799773 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:14:45.799783 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:14:45.799798 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:14:45.799809 | orchestrator |
2025-07-12 20:14:45.799820 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-07-12 20:14:45.799831 | orchestrator | Saturday 12 July 2025 20:13:47 +0000 (0:00:03.055) 0:00:19.253 *********
2025-07-12 20:14:45.799842 | orchestrator | changed: [testbed-manager]
2025-07-12 20:14:45.799852 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:14:45.799863 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:14:45.799873 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:14:45.799884 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:14:45.799894 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:14:45.799905 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:14:45.799915 | orchestrator |
2025-07-12 20:14:45.799926 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-07-12 20:14:45.799937 | orchestrator | Saturday 12 July 2025 20:13:57 +0000 (0:00:10.061) 0:00:29.314 *********
2025-07-12 20:14:45.799959 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:14:45.799977 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:14:45.799996 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:14:45.800015 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:14:45.800035 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:14:45.800073 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:14:45.800095 | orchestrator | changed: [testbed-manager]
2025-07-12 20:14:45.800113 | orchestrator |
2025-07-12 20:14:45.800132 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-07-12 20:14:45.800143 | orchestrator | Saturday 12 July 2025 20:14:20 +0000 (0:00:22.333) 0:00:51.649 *********
2025-07-12 20:14:45.800155 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:14:45.800167 | orchestrator |
2025-07-12 20:14:45.800177 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-07-12 20:14:45.800188 | orchestrator | Saturday 12 July 2025 20:14:21 +0000 (0:00:01.894) 0:00:53.544 *********
2025-07-12 20:14:45.800198 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-07-12 20:14:45.800209 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-07-12 20:14:45.800219 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-07-12 20:14:45.800230 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-07-12 20:14:45.800240 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-07-12 20:14:45.800251 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-07-12 20:14:45.800261 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-07-12 20:14:45.800272 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-07-12 20:14:45.800300 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-07-12 20:14:45.800311 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-07-12 20:14:45.800321 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-07-12 20:14:45.800332 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-07-12 20:14:45.800342 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-07-12 20:14:45.800353 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-07-12 20:14:45.800363 | orchestrator |
2025-07-12 20:14:45.800373 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-07-12 20:14:45.800385 | orchestrator | Saturday 12 July 2025 20:14:28 +0000 (0:00:06.205) 0:00:59.750 *********
2025-07-12 20:14:45.800395 | orchestrator | ok: [testbed-manager]
2025-07-12 20:14:45.800406 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:14:45.800416 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:14:45.800427 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:14:45.800437 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:14:45.800447 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:14:45.800458 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:14:45.800468 | orchestrator |
2025-07-12 20:14:45.800479 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-07-12 20:14:45.800489 | orchestrator | Saturday 12 July 2025 20:14:30 +0000 (0:00:02.070) 0:01:01.820 *********
2025-07-12 20:14:45.800500 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:14:45.800510 | orchestrator | changed: [testbed-manager]
2025-07-12 20:14:45.800521 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:14:45.800531 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:14:45.800542 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:14:45.800552 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:14:45.800562 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:14:45.800573 | orchestrator |
2025-07-12 20:14:45.800583 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-07-12 20:14:45.800611 | orchestrator | Saturday 12 July 2025 20:14:31 +0000 (0:00:01.758) 0:01:03.579 *********
2025-07-12 20:14:45.800623 | orchestrator | ok: [testbed-manager]
2025-07-12 20:14:45.800633 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:14:45.800644 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:14:45.800654 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:14:45.800665 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:14:45.800675 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:14:45.800686 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:14:45.800696 | orchestrator |
2025-07-12 20:14:45.800707 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-07-12 20:14:45.800718 | orchestrator | Saturday 12 July 2025 20:14:33 +0000 (0:00:01.585) 0:01:05.165 *********
2025-07-12 20:14:45.800729 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:14:45.800740 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:14:45.800750 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:14:45.800761 | orchestrator | ok: [testbed-manager]
2025-07-12 20:14:45.800771 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:14:45.800781 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:14:45.800792 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:14:45.800803 | orchestrator |
2025-07-12 20:14:45.800813 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-07-12 20:14:45.800824 | orchestrator | Saturday 12 July 2025 20:14:35 +0000 (0:00:02.370) 0:01:07.535 *********
2025-07-12 20:14:45.800835 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-07-12 20:14:45.800846 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:14:45.800857 | orchestrator |
2025-07-12 20:14:45.800868 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-07-12 20:14:45.800879 | orchestrator | Saturday 12 July 2025 20:14:37 +0000 (0:00:01.861) 0:01:09.397 *********
2025-07-12 20:14:45.800889 | orchestrator | changed: [testbed-manager]
2025-07-12 20:14:45.800900 | orchestrator |
2025-07-12 20:14:45.800911 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-07-12 20:14:45.800921 | orchestrator | Saturday 12 July 2025 20:14:40 +0000 (0:00:02.446) 0:01:11.843 *********
2025-07-12 20:14:45.800932 | orchestrator | changed: [testbed-manager]
2025-07-12 20:14:45.800942 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:14:45.800953 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:14:45.800964 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:14:45.800974 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:14:45.800984 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:14:45.800995 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:14:45.801035 | orchestrator |
2025-07-12 20:14:45.801047 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:14:45.801122 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:14:45.801134 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:14:45.801145 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:14:45.801156 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:14:45.801167 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:14:45.801178 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:14:45.801201 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:14:45.801212 | orchestrator |
2025-07-12 20:14:45.801232 | orchestrator |
2025-07-12 20:14:45.801260 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:14:45.801284 | orchestrator | Saturday 12 July 2025 20:14:43 +0000 (0:00:03.395) 0:01:15.239 *********
2025-07-12 20:14:45.801302 | orchestrator | ===============================================================================
2025-07-12 20:14:45.801321 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 22.33s
2025-07-12 20:14:45.801339 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.06s
2025-07-12 20:14:45.801378 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.21s
2025-07-12 20:14:45.801398 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 5.27s
2025-07-12 20:14:45.801412 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 4.36s
2025-07-12 20:14:45.801430 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.40s
2025-07-12 20:14:45.801440 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.06s
2025-07-12 20:14:45.801450 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.99s
2025-07-12 20:14:45.801459 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 2.66s
2025-07-12 20:14:45.801468 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.45s
2025-07-12 20:14:45.801478 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.37s
2025-07-12 20:14:45.801495 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 2.07s
2025-07-12 20:14:45.801505 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.89s
2025-07-12 20:14:45.801514 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.86s
2025-07-12 20:14:45.801524 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.76s
2025-07-12 20:14:45.801533 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.59s
2025-07-12 20:14:45.801543 | orchestrator | 2025-07-12 20:14:45 | INFO  | Task e454c8f3-4d37-4ad2-8d12-6db4c2976b18 is in state SUCCESS
2025-07-12 20:14:45.801552 | orchestrator | 2025-07-12 20:14:45 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:14:45.801695 | orchestrator | 2025-07-12 20:14:45 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:14:45.801710 | orchestrator | 2025-07-12 20:14:45 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state STARTED
2025-07-12 20:14:45.801725 | orchestrator | 2025-07-12 20:14:45 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED
2025-07-12 20:14:45.801735 | orchestrator | 2025-07-12 20:14:45 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:14:48.834115 | orchestrator | 2025-07-12 20:14:48 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:14:48.834214 | orchestrator | 2025-07-12 20:14:48 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:14:48.834507 | orchestrator | 2025-07-12 20:14:48 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state STARTED
2025-07-12 20:14:48.835755 | orchestrator | 2025-07-12 20:14:48 | INFO  | Task
05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:14:48.835776 | orchestrator | 2025-07-12 20:14:48 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:14:51.878264 | orchestrator | 2025-07-12 20:14:51 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:14:51.880426 | orchestrator | 2025-07-12 20:14:51 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:14:51.883017 | orchestrator | 2025-07-12 20:14:51 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state STARTED 2025-07-12 20:14:51.885411 | orchestrator | 2025-07-12 20:14:51 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:14:51.885922 | orchestrator | 2025-07-12 20:14:51 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:14:54.924806 | orchestrator | 2025-07-12 20:14:54 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:14:54.931518 | orchestrator | 2025-07-12 20:14:54 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:14:54.934299 | orchestrator | 2025-07-12 20:14:54 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state STARTED 2025-07-12 20:14:54.937354 | orchestrator | 2025-07-12 20:14:54 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:14:54.937894 | orchestrator | 2025-07-12 20:14:54 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:14:57.974848 | orchestrator | 2025-07-12 20:14:57 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:14:57.975978 | orchestrator | 2025-07-12 20:14:57 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:14:57.977941 | orchestrator | 2025-07-12 20:14:57 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state STARTED 2025-07-12 20:14:57.979471 | orchestrator | 2025-07-12 20:14:57 | INFO  | Task 
05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:14:57.979621 | orchestrator | 2025-07-12 20:14:57 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:01.028766 | orchestrator | 2025-07-12 20:15:01 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:15:01.031957 | orchestrator | 2025-07-12 20:15:01 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:15:01.034086 | orchestrator | 2025-07-12 20:15:01 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state STARTED 2025-07-12 20:15:01.034887 | orchestrator | 2025-07-12 20:15:01 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:15:01.034923 | orchestrator | 2025-07-12 20:15:01 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:04.080911 | orchestrator | 2025-07-12 20:15:04 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:15:04.082316 | orchestrator | 2025-07-12 20:15:04 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:15:04.085987 | orchestrator | 2025-07-12 20:15:04 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state STARTED 2025-07-12 20:15:04.086101 | orchestrator | 2025-07-12 20:15:04 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:15:04.086115 | orchestrator | 2025-07-12 20:15:04 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:07.131428 | orchestrator | 2025-07-12 20:15:07 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:15:07.131939 | orchestrator | 2025-07-12 20:15:07 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:15:07.133799 | orchestrator | 2025-07-12 20:15:07 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state STARTED 2025-07-12 20:15:07.134665 | orchestrator | 2025-07-12 20:15:07 | INFO  | Task 
05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:15:07.136498 | orchestrator | 2025-07-12 20:15:07 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:10.191370 | orchestrator | 2025-07-12 20:15:10 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:15:10.191512 | orchestrator | 2025-07-12 20:15:10 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:15:10.191527 | orchestrator | 2025-07-12 20:15:10 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state STARTED 2025-07-12 20:15:10.191620 | orchestrator | 2025-07-12 20:15:10 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:15:10.191638 | orchestrator | 2025-07-12 20:15:10 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:13.240531 | orchestrator | 2025-07-12 20:15:13 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:15:13.240830 | orchestrator | 2025-07-12 20:15:13 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:15:13.242449 | orchestrator | 2025-07-12 20:15:13 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state STARTED 2025-07-12 20:15:13.243276 | orchestrator | 2025-07-12 20:15:13 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:15:13.243722 | orchestrator | 2025-07-12 20:15:13 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:16.287824 | orchestrator | 2025-07-12 20:15:16 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:15:16.291157 | orchestrator | 2025-07-12 20:15:16 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:15:16.293731 | orchestrator | 2025-07-12 20:15:16 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state STARTED 2025-07-12 20:15:16.295692 | orchestrator | 2025-07-12 20:15:16 | INFO  | Task 
05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:15:16.295741 | orchestrator | 2025-07-12 20:15:16 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:19.342831 | orchestrator | 2025-07-12 20:15:19 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:15:19.344737 | orchestrator | 2025-07-12 20:15:19 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:15:19.347490 | orchestrator | 2025-07-12 20:15:19 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state STARTED 2025-07-12 20:15:19.348216 | orchestrator | 2025-07-12 20:15:19 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:15:19.348236 | orchestrator | 2025-07-12 20:15:19 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:22.403535 | orchestrator | 2025-07-12 20:15:22 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:15:22.406383 | orchestrator | 2025-07-12 20:15:22 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:15:22.406469 | orchestrator | 2025-07-12 20:15:22 | INFO  | Task 1849c926-8d7b-4099-a728-d96893d2057c is in state SUCCESS 2025-07-12 20:15:22.407956 | orchestrator | 2025-07-12 20:15:22 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:15:22.408006 | orchestrator | 2025-07-12 20:15:22 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:25.448251 | orchestrator | 2025-07-12 20:15:25 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:15:25.449738 | orchestrator | 2025-07-12 20:15:25 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:15:25.452171 | orchestrator | 2025-07-12 20:15:25 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:15:25.453247 | orchestrator | 2025-07-12 20:15:25 | INFO  | Wait 1 second(s) until the next 
check 2025-07-12 20:15:28.508109 | orchestrator | 2025-07-12 20:15:28 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:15:28.510264 | orchestrator | 2025-07-12 20:15:28 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:15:28.513150 | orchestrator | 2025-07-12 20:15:28 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:15:28.513202 | orchestrator | 2025-07-12 20:15:28 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:31.591190 | orchestrator | 2025-07-12 20:15:31 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:15:31.594925 | orchestrator | 2025-07-12 20:15:31 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:15:31.595016 | orchestrator | 2025-07-12 20:15:31 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:15:31.595083 | orchestrator | 2025-07-12 20:15:31 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:34.645185 | orchestrator | 2025-07-12 20:15:34 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:15:34.648633 | orchestrator | 2025-07-12 20:15:34 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:15:34.648706 | orchestrator | 2025-07-12 20:15:34 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:15:34.648718 | orchestrator | 2025-07-12 20:15:34 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:37.703234 | orchestrator | 2025-07-12 20:15:37 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:15:37.704997 | orchestrator | 2025-07-12 20:15:37 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:15:37.707065 | orchestrator | 2025-07-12 20:15:37 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 
20:15:37.707099 | orchestrator | 2025-07-12 20:15:37 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:40.775600 | orchestrator | 2025-07-12 20:15:40 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:15:40.777137 | orchestrator | 2025-07-12 20:15:40 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:15:40.777887 | orchestrator | 2025-07-12 20:15:40 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:15:40.777924 | orchestrator | 2025-07-12 20:15:40 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:43.814547 | orchestrator | 2025-07-12 20:15:43 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:15:43.817300 | orchestrator | 2025-07-12 20:15:43 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:15:43.818210 | orchestrator | 2025-07-12 20:15:43 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:15:43.818241 | orchestrator | 2025-07-12 20:15:43 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:46.863835 | orchestrator | 2025-07-12 20:15:46 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:15:46.866875 | orchestrator | 2025-07-12 20:15:46 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:15:46.869861 | orchestrator | 2025-07-12 20:15:46 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:15:46.869927 | orchestrator | 2025-07-12 20:15:46 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:49.928354 | orchestrator | 2025-07-12 20:15:49 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:15:49.930241 | orchestrator | 2025-07-12 20:15:49 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:15:49.931688 | orchestrator | 2025-07-12 20:15:49 | 
INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:15:49.931718 | orchestrator | 2025-07-12 20:15:49 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:52.977714 | orchestrator | 2025-07-12 20:15:52 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:15:52.979637 | orchestrator | 2025-07-12 20:15:52 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:15:52.981658 | orchestrator | 2025-07-12 20:15:52 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:15:52.981705 | orchestrator | 2025-07-12 20:15:52 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:56.152885 | orchestrator | 2025-07-12 20:15:56 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:15:56.155045 | orchestrator | 2025-07-12 20:15:56 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:15:56.156306 | orchestrator | 2025-07-12 20:15:56 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:15:56.156673 | orchestrator | 2025-07-12 20:15:56 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:15:59.213150 | orchestrator | 2025-07-12 20:15:59 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:15:59.214304 | orchestrator | 2025-07-12 20:15:59 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:15:59.216096 | orchestrator | 2025-07-12 20:15:59 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:15:59.216381 | orchestrator | 2025-07-12 20:15:59 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:02.259220 | orchestrator | 2025-07-12 20:16:02 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:16:02.261571 | orchestrator | 2025-07-12 20:16:02 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in 
state STARTED 2025-07-12 20:16:02.265072 | orchestrator | 2025-07-12 20:16:02 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:16:02.265112 | orchestrator | 2025-07-12 20:16:02 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:05.304164 | orchestrator | 2025-07-12 20:16:05 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:16:05.306635 | orchestrator | 2025-07-12 20:16:05 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:16:05.308155 | orchestrator | 2025-07-12 20:16:05 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:16:05.308192 | orchestrator | 2025-07-12 20:16:05 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:08.350523 | orchestrator | 2025-07-12 20:16:08 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:16:08.351618 | orchestrator | 2025-07-12 20:16:08 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:16:08.353971 | orchestrator | 2025-07-12 20:16:08 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state STARTED 2025-07-12 20:16:08.354245 | orchestrator | 2025-07-12 20:16:08 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:11.389077 | orchestrator | 2025-07-12 20:16:11 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED 2025-07-12 20:16:11.391328 | orchestrator | 2025-07-12 20:16:11 | INFO  | Task 8537a98d-1c81-4bf3-9c25-ee024f9dc643 is in state STARTED 2025-07-12 20:16:11.391922 | orchestrator | 2025-07-12 20:16:11 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:16:11.392524 | orchestrator | 2025-07-12 20:16:11 | INFO  | Task 3492ecc3-11ad-4d37-a18c-e808b6786f1a is in state STARTED 2025-07-12 20:16:11.393247 | orchestrator | 2025-07-12 20:16:11 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 
20:16:11.395138 | orchestrator | 2025-07-12 20:16:11 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:16:11.398398 | orchestrator | 2025-07-12 20:16:11 | INFO  | Task 05d21789-6ab7-458a-be78-40cb4e927d61 is in state SUCCESS 2025-07-12 20:16:11.400148 | orchestrator | 2025-07-12 20:16:11.400191 | orchestrator | 2025-07-12 20:16:11.400204 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-07-12 20:16:11.400216 | orchestrator | 2025-07-12 20:16:11.400227 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-07-12 20:16:11.400238 | orchestrator | Saturday 12 July 2025 20:13:57 +0000 (0:00:00.234) 0:00:00.234 ********* 2025-07-12 20:16:11.400250 | orchestrator | ok: [testbed-manager] 2025-07-12 20:16:11.400261 | orchestrator | 2025-07-12 20:16:11.400272 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-07-12 20:16:11.400283 | orchestrator | Saturday 12 July 2025 20:13:58 +0000 (0:00:00.998) 0:00:01.232 ********* 2025-07-12 20:16:11.400294 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-07-12 20:16:11.400305 | orchestrator | 2025-07-12 20:16:11.400316 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-07-12 20:16:11.400327 | orchestrator | Saturday 12 July 2025 20:13:59 +0000 (0:00:01.299) 0:00:02.531 ********* 2025-07-12 20:16:11.400338 | orchestrator | changed: [testbed-manager] 2025-07-12 20:16:11.400349 | orchestrator | 2025-07-12 20:16:11.400359 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-07-12 20:16:11.400370 | orchestrator | Saturday 12 July 2025 20:14:01 +0000 (0:00:01.521) 0:00:04.052 ********* 2025-07-12 20:16:11.400381 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 
2025-07-12 20:16:11.400392 | orchestrator | ok: [testbed-manager] 2025-07-12 20:16:11.400403 | orchestrator | 2025-07-12 20:16:11.400413 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-07-12 20:16:11.400424 | orchestrator | Saturday 12 July 2025 20:14:56 +0000 (0:00:55.276) 0:00:59.329 ********* 2025-07-12 20:16:11.400435 | orchestrator | changed: [testbed-manager] 2025-07-12 20:16:11.400445 | orchestrator | 2025-07-12 20:16:11.400456 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:16:11.400467 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:16:11.400479 | orchestrator | 2025-07-12 20:16:11.400490 | orchestrator | 2025-07-12 20:16:11.400501 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:16:11.400512 | orchestrator | Saturday 12 July 2025 20:15:20 +0000 (0:00:23.953) 0:01:23.282 ********* 2025-07-12 20:16:11.400529 | orchestrator | =============================================================================== 2025-07-12 20:16:11.400540 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 55.28s 2025-07-12 20:16:11.400551 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ----------------- 23.95s 2025-07-12 20:16:11.400561 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.52s 2025-07-12 20:16:11.400636 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.30s 2025-07-12 20:16:11.400665 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.00s 2025-07-12 20:16:11.400676 | orchestrator | 2025-07-12 20:16:11.400687 | orchestrator | 2025-07-12 20:16:11.400697 | orchestrator | PLAY [Apply role common] 
******************************************************* 2025-07-12 20:16:11.400708 | orchestrator | 2025-07-12 20:16:11.400719 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-07-12 20:16:11.400730 | orchestrator | Saturday 12 July 2025 20:13:16 +0000 (0:00:00.332) 0:00:00.332 ********* 2025-07-12 20:16:11.400806 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:16:11.400822 | orchestrator | 2025-07-12 20:16:11.400834 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-07-12 20:16:11.400846 | orchestrator | Saturday 12 July 2025 20:13:18 +0000 (0:00:01.647) 0:00:01.979 ********* 2025-07-12 20:16:11.400882 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-12 20:16:11.400896 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-12 20:16:11.400908 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-12 20:16:11.400920 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-12 20:16:11.400932 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-12 20:16:11.400944 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-12 20:16:11.400956 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-12 20:16:11.400968 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-12 20:16:11.400980 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-12 20:16:11.400993 | orchestrator | changed: [testbed-node-3] => 
(item=[{'service_name': 'cron'}, 'cron']) 2025-07-12 20:16:11.401030 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-12 20:16:11.401042 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-12 20:16:11.401052 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-07-12 20:16:11.401064 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-12 20:16:11.401075 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-12 20:16:11.401086 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-12 20:16:11.401110 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-12 20:16:11.401121 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-07-12 20:16:11.401132 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-12 20:16:11.401143 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-12 20:16:11.401154 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-07-12 20:16:11.401165 | orchestrator | 2025-07-12 20:16:11.401176 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-07-12 20:16:11.401187 | orchestrator | Saturday 12 July 2025 20:13:24 +0000 (0:00:06.117) 0:00:08.097 ********* 2025-07-12 20:16:11.401198 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:16:11.401210 | orchestrator | 2025-07-12 
20:16:11.401220 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-07-12 20:16:11.401231 | orchestrator | Saturday 12 July 2025 20:13:26 +0000 (0:00:02.022) 0:00:10.120 ********* 2025-07-12 20:16:11.401255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 20:16:11.401271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 20:16:11.401283 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 20:16:11.401295 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.401306 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.401340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401352 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.401375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401392 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401415 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.401427 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401455 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401480 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401532 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401553 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401596 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401616 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401635 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401654 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE':
'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401675 | orchestrator |
2025-07-12 20:16:11.401695 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-07-12 20:16:11.401723 | orchestrator | Saturday 12 July 2025 20:13:32 +0000 (0:00:06.631) 0:00:16.752 *********
2025-07-12 20:16:11.401736 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.401757 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401775 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401786 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:16:11.401798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.401810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401833 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.401851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.401881 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401920 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:11.401931 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:11.401943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.401954 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401965 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.401989 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:11.402094 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:11.402111 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.402123 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.402140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.402151 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:11.402162 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.402173 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.402185 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.402196 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:11.402207 | orchestrator |
2025-07-12 20:16:11.402218 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-07-12 20:16:11.402236 | orchestrator | Saturday 12 July 2025 20:13:34 +0000 (0:00:01.386) 0:00:18.138 *********
2025-07-12 20:16:11.402247 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.402267 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name':
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.402279 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.402290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.402306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.402317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.402328 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:16:11.402339 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:11.402350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.402368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.402386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.402398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.402410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.402425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.402437 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:11.402448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.402460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.402478 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.402489 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:11.402500 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:11.402511 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.402546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.402558 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.402570 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:11.402581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.402592 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.402603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.402620 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:11.402631 | orchestrator |
2025-07-12 20:16:11.402642 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-07-12 20:16:11.402653 | orchestrator | Saturday 12 July 2025 20:13:37 +0000 (0:00:03.546) 0:00:21.684 *********
2025-07-12 20:16:11.402664 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:16:11.402674 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:11.402685 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:11.402695 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:11.402706 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:11.402716 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:11.402727 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:11.402737 | orchestrator |
2025-07-12 20:16:11.402754 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-07-12 20:16:11.402765 | orchestrator | Saturday 12 July 2025 20:13:39 +0000 (0:00:01.693) 0:00:23.378 *********
2025-07-12 20:16:11.402776 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:16:11.402786 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:16:11.402797 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:16:11.402807 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:16:11.402818 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:16:11.402828 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:16:11.402839 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:16:11.402850 | orchestrator |
2025-07-12 20:16:11.402860 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-07-12 20:16:11.402871 | orchestrator | Saturday 12 July 2025 20:13:40 +0000 (0:00:01.352) 0:00:24.731 *********
2025-07-12 20:16:11.402894 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.402907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.402918 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.402938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.402955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.402967 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.402978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.402990 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group':
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:16:11.403026 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 20:16:11.403038 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 20:16:11.403049 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:16:11.403072 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:16:11.403083 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:16:11.403095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:16:11.403106 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:16:11.403123 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:16:11.403135 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:16:11.403146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:16:11.403162 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:16:11.403181 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:16:11.403192 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:16:11.403203 | orchestrator | 2025-07-12 20:16:11.403214 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-07-12 20:16:11.403225 | orchestrator | Saturday 12 July 2025 20:13:47 +0000 (0:00:06.358) 0:00:31.089 ********* 2025-07-12 20:16:11.403236 | orchestrator | [WARNING]: Skipped 2025-07-12 
20:16:11.403247 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-07-12 20:16:11.403258 | orchestrator | to this access issue: 2025-07-12 20:16:11.403269 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-07-12 20:16:11.403279 | orchestrator | directory 2025-07-12 20:16:11.403290 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-12 20:16:11.403300 | orchestrator | 2025-07-12 20:16:11.403311 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-07-12 20:16:11.403322 | orchestrator | Saturday 12 July 2025 20:13:48 +0000 (0:00:01.206) 0:00:32.296 ********* 2025-07-12 20:16:11.403332 | orchestrator | [WARNING]: Skipped 2025-07-12 20:16:11.403343 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2025-07-12 20:16:11.403354 | orchestrator | to this access issue: 2025-07-12 20:16:11.403364 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-07-12 20:16:11.403375 | orchestrator | directory 2025-07-12 20:16:11.403385 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-12 20:16:11.403396 | orchestrator | 2025-07-12 20:16:11.403407 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-07-12 20:16:11.403417 | orchestrator | Saturday 12 July 2025 20:13:50 +0000 (0:00:01.805) 0:00:34.101 ********* 2025-07-12 20:16:11.403428 | orchestrator | [WARNING]: Skipped 2025-07-12 20:16:11.403439 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-07-12 20:16:11.403450 | orchestrator | to this access issue: 2025-07-12 20:16:11.403461 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-07-12 20:16:11.403471 | orchestrator | directory 2025-07-12 20:16:11.403483 | orchestrator | ok: 
[testbed-manager -> localhost] 2025-07-12 20:16:11.403494 | orchestrator | 2025-07-12 20:16:11.403510 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-07-12 20:16:11.403522 | orchestrator | Saturday 12 July 2025 20:13:51 +0000 (0:00:01.430) 0:00:35.531 ********* 2025-07-12 20:16:11.403533 | orchestrator | [WARNING]: Skipped 2025-07-12 20:16:11.403544 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-07-12 20:16:11.403555 | orchestrator | to this access issue: 2025-07-12 20:16:11.403565 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-07-12 20:16:11.403583 | orchestrator | directory 2025-07-12 20:16:11.403594 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-12 20:16:11.403605 | orchestrator | 2025-07-12 20:16:11.403615 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2025-07-12 20:16:11.403626 | orchestrator | Saturday 12 July 2025 20:13:53 +0000 (0:00:01.555) 0:00:37.087 ********* 2025-07-12 20:16:11.403637 | orchestrator | changed: [testbed-manager] 2025-07-12 20:16:11.403647 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:16:11.403658 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:16:11.403669 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:16:11.403679 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:16:11.403690 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:16:11.403700 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:16:11.403711 | orchestrator | 2025-07-12 20:16:11.403722 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-07-12 20:16:11.403733 | orchestrator | Saturday 12 July 2025 20:13:58 +0000 (0:00:04.860) 0:00:41.947 ********* 2025-07-12 20:16:11.403743 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-12 20:16:11.403754 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-12 20:16:11.403765 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-12 20:16:11.403776 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-12 20:16:11.403787 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-12 20:16:11.403803 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-12 20:16:11.403814 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-07-12 20:16:11.403825 | orchestrator | 2025-07-12 20:16:11.403835 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-07-12 20:16:11.403846 | orchestrator | Saturday 12 July 2025 20:14:02 +0000 (0:00:04.519) 0:00:46.467 ********* 2025-07-12 20:16:11.403857 | orchestrator | changed: [testbed-manager] 2025-07-12 20:16:11.403868 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:16:11.403879 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:16:11.403889 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:16:11.403900 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:16:11.403910 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:16:11.403921 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:16:11.403931 | orchestrator | 2025-07-12 20:16:11.403942 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-07-12 20:16:11.403953 | orchestrator | Saturday 12 July 2025 20:14:05 +0000 (0:00:03.187) 0:00:49.654 ********* 2025-07-12 
20:16:11.403964 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 20:16:11.403976 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:16:11.404009 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:16:11.404038 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 20:16:11.404050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:16:11.404062 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 20:16:11.404077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:16:11.404089 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 20:16:11.404100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:16:11.404118 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:16:11.404136 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:16:11.404148 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:16:11.404159 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 20:16:11.404170 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 20:16:11.404186 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:16:11.404197 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:16:11.404208 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-07-12 20:16:11.404225 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:16:11.404247 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:16:11.404259 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:16:11.404270 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:16:11.404281 | orchestrator | 2025-07-12 20:16:11.404292 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla 
toolbox] ************************ 2025-07-12 20:16:11.404303 | orchestrator | Saturday 12 July 2025 20:14:08 +0000 (0:00:03.276) 0:00:52.931 ********* 2025-07-12 20:16:11.404314 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-12 20:16:11.404325 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-12 20:16:11.404340 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-12 20:16:11.404351 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-12 20:16:11.404362 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-12 20:16:11.404373 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-12 20:16:11.404384 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-07-12 20:16:11.404394 | orchestrator | 2025-07-12 20:16:11.404405 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-07-12 20:16:11.404416 | orchestrator | Saturday 12 July 2025 20:14:12 +0000 (0:00:03.255) 0:00:56.186 ********* 2025-07-12 20:16:11.404427 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-12 20:16:11.404438 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-12 20:16:11.404455 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-12 20:16:11.404466 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-07-12 20:16:11.404477 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 
2025-07-12 20:16:11.404488 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-12 20:16:11.404498 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-07-12 20:16:11.404509 | orchestrator |
2025-07-12 20:16:11.404520 | orchestrator | TASK [common : Check common containers] ****************************************
2025-07-12 20:16:11.404531 | orchestrator | Saturday 12 July 2025 20:14:15 +0000 (0:00:03.087) 0:00:59.273 *********
2025-07-12 20:16:11.404542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.404554 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.404573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.404585 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.404597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.404608 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.404631 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.404642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.404654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.404671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.404687 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.404699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.404714 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.404732 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.404743 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-07-12 20:16:11.404755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.404767 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.404785 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.404797 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.404808 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.404824 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:16:11.404849 | orchestrator |
2025-07-12 20:16:11.404868 | orchestrator | TASK [common : Creating log volume] ********************************************
2025-07-12 20:16:11.404887 | orchestrator | Saturday 12 July 2025 20:14:21 +0000 (0:00:05.970) 0:01:05.244 *********
2025-07-12 20:16:11.404905 | orchestrator | changed: [testbed-manager]
2025-07-12 20:16:11.404923 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:16:11.404942 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:16:11.404960 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:16:11.404979 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:16:11.405055 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:16:11.405075 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:16:11.405086 | orchestrator |
2025-07-12 20:16:11.405096 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2025-07-12 20:16:11.405107 | orchestrator | Saturday 12 July 2025 20:14:23 +0000 (0:00:01.875) 0:01:07.120 *********
2025-07-12 20:16:11.405118 | orchestrator | changed: [testbed-manager]
2025-07-12 20:16:11.405128 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:16:11.405139 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:16:11.405149 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:16:11.405160 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:16:11.405170 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:16:11.405181 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:16:11.405192 | orchestrator |
2025-07-12 20:16:11.405202 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-12 20:16:11.405213 | orchestrator | Saturday 12 July 2025 20:14:24 +0000 (0:00:01.585) 0:01:08.705 *********
2025-07-12 20:16:11.405223 | orchestrator |
2025-07-12 20:16:11.405234 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-12 20:16:11.405245 | orchestrator | Saturday 12 July 2025 20:14:24 +0000 (0:00:00.069) 0:01:08.774 *********
2025-07-12 20:16:11.405256 | orchestrator |
2025-07-12 20:16:11.405267 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-12 20:16:11.405278 | orchestrator | Saturday 12 July 2025 20:14:24 +0000 (0:00:00.066) 0:01:08.841 *********
2025-07-12 20:16:11.405288 | orchestrator |
2025-07-12 20:16:11.405299 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-12 20:16:11.405310 | orchestrator | Saturday 12 July 2025 20:14:24 +0000 (0:00:00.070) 0:01:08.912 *********
2025-07-12 20:16:11.405320 | orchestrator |
2025-07-12 20:16:11.405331 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-12 20:16:11.405342 | orchestrator | Saturday 12 July 2025 20:14:25 +0000 (0:00:00.087) 0:01:09.000 *********
2025-07-12 20:16:11.405352 | orchestrator |
2025-07-12 20:16:11.405363 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-12 20:16:11.405374 | orchestrator | Saturday 12 July 2025 20:14:25 +0000 (0:00:00.233) 0:01:09.234 *********
2025-07-12 20:16:11.405384 | orchestrator |
2025-07-12 20:16:11.405395 | orchestrator | TASK [common : Flush handlers] *************************************************
2025-07-12 20:16:11.405405 | orchestrator | Saturday 12 July 2025 20:14:25 +0000 (0:00:00.079) 0:01:09.313 *********
2025-07-12 20:16:11.405416 | orchestrator |
2025-07-12 20:16:11.405427 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2025-07-12 20:16:11.405438 | orchestrator | Saturday 12 July 2025 20:14:25 +0000 (0:00:00.157) 0:01:09.470 *********
2025-07-12 20:16:11.405458 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:16:11.405469 | orchestrator | changed: [testbed-manager]
2025-07-12 20:16:11.405480 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:16:11.405491 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:16:11.405511 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:16:11.405521 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:16:11.405532 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:16:11.405543 | orchestrator |
2025-07-12 20:16:11.405553 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2025-07-12 20:16:11.405564 | orchestrator | Saturday 12 July 2025 20:15:06 +0000 (0:00:41.215) 0:01:50.686 *********
2025-07-12 20:16:11.405575 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:16:11.405584 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:16:11.405594 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:16:11.405603 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:16:11.405612 | orchestrator | changed: [testbed-manager]
2025-07-12 20:16:11.405622 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:16:11.405631 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:16:11.405641 | orchestrator |
2025-07-12 20:16:11.405650 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2025-07-12 20:16:11.405660 | orchestrator | Saturday 12 July 2025 20:15:58 +0000 (0:00:51.475) 0:02:42.162 *********
2025-07-12 20:16:11.405670 | orchestrator | ok: [testbed-manager]
2025-07-12 20:16:11.405679 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:16:11.405689 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:16:11.405698 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:16:11.405708 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:16:11.405717 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:16:11.405726 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:16:11.405736 | orchestrator |
2025-07-12 20:16:11.405745 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2025-07-12 20:16:11.405755 | orchestrator | Saturday 12 July 2025 20:16:00 +0000 (0:00:01.853) 0:02:44.015 *********
2025-07-12 20:16:11.405765 | orchestrator | changed: [testbed-manager]
2025-07-12 20:16:11.405774 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:16:11.405784 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:16:11.405793 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:16:11.405802 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:16:11.405812 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:16:11.405821 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:16:11.405831 | orchestrator |
2025-07-12 20:16:11.405840 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:16:11.405856 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-07-12 20:16:11.405867 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-07-12 20:16:11.405877 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-07-12 20:16:11.405887 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-07-12 20:16:11.405896 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-07-12 20:16:11.405906 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-07-12 20:16:11.405916 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2025-07-12 20:16:11.405925 | orchestrator |
2025-07-12 20:16:11.405935 | orchestrator |
2025-07-12 20:16:11.405944 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:16:11.405954 | orchestrator | Saturday 12 July 2025 20:16:09 +0000 (0:00:08.928) 0:02:52.943 *********
2025-07-12 20:16:11.405964 | orchestrator | ===============================================================================
2025-07-12 20:16:11.405978 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 51.48s
2025-07-12 20:16:11.405988 | orchestrator | common : Restart fluentd container ------------------------------------- 41.22s
2025-07-12 20:16:11.406063 | orchestrator | common : Restart cron container ----------------------------------------- 8.93s
2025-07-12 20:16:11.406078 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 6.63s
2025-07-12 20:16:11.406087 | orchestrator | common : Copying over config.json files for services -------------------- 6.36s
2025-07-12 20:16:11.406097 | orchestrator | common : Ensuring config directories exist ------------------------------ 6.12s
2025-07-12 20:16:11.406107 | orchestrator | common : Check common containers ---------------------------------------- 5.97s
2025-07-12 20:16:11.406116 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.86s
2025-07-12 20:16:11.406126 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.52s
2025-07-12 20:16:11.406135 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.55s
2025-07-12 20:16:11.406144 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.28s
2025-07-12 20:16:11.406154 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 3.26s
2025-07-12 20:16:11.406164 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.19s
2025-07-12 20:16:11.406173 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.09s
2025-07-12 20:16:11.406190 | orchestrator | common : include_tasks -------------------------------------------------- 2.02s
2025-07-12 20:16:11.406200 | orchestrator | common : Creating log volume -------------------------------------------- 1.88s
2025-07-12 20:16:11.406210 | orchestrator | common : Initializing toolbox container using normal user --------------- 1.85s
2025-07-12 20:16:11.406219 | orchestrator | common : Find custom fluentd filter config files ------------------------ 1.81s
2025-07-12 20:16:11.406229 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.69s
2025-07-12 20:16:11.406238 | orchestrator | common : include_tasks -------------------------------------------------- 1.65s
2025-07-12 20:16:11.406248 | orchestrator | 2025-07-12 20:16:11 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:16:14.433784 | orchestrator | 2025-07-12 20:16:14 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED
2025-07-12 20:16:14.434409 | orchestrator | 2025-07-12 20:16:14 | INFO  | Task 8537a98d-1c81-4bf3-9c25-ee024f9dc643 is in state STARTED
2025-07-12 20:16:14.434894 | orchestrator | 2025-07-12 20:16:14 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:16:14.435717 | orchestrator | 2025-07-12 20:16:14 | INFO  | Task 3492ecc3-11ad-4d37-a18c-e808b6786f1a is in state STARTED
2025-07-12 20:16:14.438462 | orchestrator | 2025-07-12 20:16:14 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:16:14.439111 | orchestrator | 2025-07-12 20:16:14 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:16:14.439140 | orchestrator | 2025-07-12 20:16:14 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:16:17.474664 | orchestrator | 2025-07-12 20:16:17 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED
2025-07-12 20:16:17.474731 | orchestrator | 2025-07-12 20:16:17 | INFO  | Task 8537a98d-1c81-4bf3-9c25-ee024f9dc643 is in state STARTED
2025-07-12 20:16:17.474755 | orchestrator | 2025-07-12 20:16:17 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:16:17.474985 | orchestrator | 2025-07-12 20:16:17 | INFO  | Task 3492ecc3-11ad-4d37-a18c-e808b6786f1a is in state STARTED
2025-07-12 20:16:17.475792 | orchestrator | 2025-07-12 20:16:17 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:16:17.477096 | orchestrator | 2025-07-12 20:16:17 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:16:17.477116 | orchestrator | 2025-07-12 20:16:17 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:16:20.522368 | orchestrator | 2025-07-12 20:16:20 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED
2025-07-12 20:16:20.524888 | orchestrator | 2025-07-12 20:16:20 | INFO  | Task 8537a98d-1c81-4bf3-9c25-ee024f9dc643 is in state STARTED
2025-07-12 20:16:20.526382 | orchestrator | 2025-07-12 20:16:20 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:16:20.527403 | orchestrator | 2025-07-12 20:16:20 | INFO  | Task 3492ecc3-11ad-4d37-a18c-e808b6786f1a is in state STARTED
2025-07-12 20:16:20.529065 | orchestrator | 2025-07-12 20:16:20 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:16:20.529090 | orchestrator | 2025-07-12 20:16:20 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:16:20.529098 | orchestrator | 2025-07-12 20:16:20 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:16:23.593856 | orchestrator | 2025-07-12 20:16:23 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED
2025-07-12 20:16:23.593950 | orchestrator | 2025-07-12 20:16:23 | INFO  | Task 8537a98d-1c81-4bf3-9c25-ee024f9dc643 is in state STARTED
2025-07-12 20:16:23.593966 | orchestrator | 2025-07-12 20:16:23 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:16:23.593977 | orchestrator | 2025-07-12 20:16:23 | INFO  | Task 3492ecc3-11ad-4d37-a18c-e808b6786f1a is in state STARTED
2025-07-12 20:16:23.594083 | orchestrator | 2025-07-12 20:16:23 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:16:23.594573 | orchestrator | 2025-07-12 20:16:23 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:16:23.597570 | orchestrator | 2025-07-12 20:16:23 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:16:26.634085 | orchestrator | 2025-07-12 20:16:26 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED
2025-07-12 20:16:26.635398 | orchestrator | 2025-07-12 20:16:26 | INFO  | Task 8537a98d-1c81-4bf3-9c25-ee024f9dc643 is in state STARTED
2025-07-12 20:16:26.636745 | orchestrator | 2025-07-12 20:16:26 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:16:26.640983 | orchestrator | 2025-07-12 20:16:26 | INFO  | Task 3492ecc3-11ad-4d37-a18c-e808b6786f1a is in state STARTED
2025-07-12 20:16:26.641082 | orchestrator | 2025-07-12 20:16:26 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:16:26.645953 | orchestrator | 2025-07-12 20:16:26 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:16:26.646011 | orchestrator | 2025-07-12 20:16:26 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:16:29.694913 | orchestrator | 2025-07-12 20:16:29 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED
2025-07-12 20:16:29.699224 | orchestrator | 2025-07-12 20:16:29 | INFO  | Task 8537a98d-1c81-4bf3-9c25-ee024f9dc643 is in state STARTED
2025-07-12 20:16:29.701628 | orchestrator | 2025-07-12 20:16:29 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:16:29.705031 | orchestrator | 2025-07-12 20:16:29 | INFO  | Task 3492ecc3-11ad-4d37-a18c-e808b6786f1a is in state STARTED
2025-07-12 20:16:29.705613 | orchestrator | 2025-07-12 20:16:29 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:16:29.710960 | orchestrator | 2025-07-12 20:16:29 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:16:29.711115 | orchestrator | 2025-07-12 20:16:29 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:16:32.774450 | orchestrator | 2025-07-12 20:16:32 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED
2025-07-12 20:16:32.775106 | orchestrator | 2025-07-12 20:16:32 | INFO  | Task 8537a98d-1c81-4bf3-9c25-ee024f9dc643 is in state STARTED
2025-07-12 20:16:32.775863 | orchestrator | 2025-07-12 20:16:32 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:16:32.776317 | orchestrator | 2025-07-12 20:16:32 | INFO  | Task 3492ecc3-11ad-4d37-a18c-e808b6786f1a is in state SUCCESS
2025-07-12 20:16:32.780448 | orchestrator | 2025-07-12 20:16:32 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:16:32.780511 | orchestrator | 2025-07-12 20:16:32 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:16:32.780526 | orchestrator | 2025-07-12 20:16:32 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:16:35.839471 | orchestrator | 2025-07-12 20:16:35 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED
2025-07-12 20:16:35.839554 | orchestrator | 2025-07-12 20:16:35 | INFO  | Task 8537a98d-1c81-4bf3-9c25-ee024f9dc643 is in state STARTED
2025-07-12 20:16:35.841928 | orchestrator | 2025-07-12 20:16:35 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:16:35.842579 | orchestrator | 2025-07-12 20:16:35 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:16:35.843497 | orchestrator | 2025-07-12 20:16:35 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:16:35.844098 | orchestrator | 2025-07-12 20:16:35 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED
2025-07-12 20:16:35.846432 | orchestrator | 2025-07-12 20:16:35 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:16:38.936668 | orchestrator | 2025-07-12 20:16:38 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED
2025-07-12 20:16:38.937903 | orchestrator | 2025-07-12 20:16:38 | INFO  | Task 8537a98d-1c81-4bf3-9c25-ee024f9dc643 is in state STARTED
2025-07-12 20:16:38.942347 | orchestrator | 2025-07-12 20:16:38 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:16:38.942474 | orchestrator | 2025-07-12 20:16:38 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:16:38.947019 | orchestrator | 2025-07-12 20:16:38 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:16:38.950373 | orchestrator | 2025-07-12 20:16:38 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED
2025-07-12 20:16:38.950471 | orchestrator | 2025-07-12 20:16:38 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:16:42.016058 | orchestrator | 2025-07-12 20:16:42 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED
2025-07-12 20:16:42.017939 | orchestrator | 2025-07-12 20:16:42 | INFO  | Task 8537a98d-1c81-4bf3-9c25-ee024f9dc643 is in state STARTED
2025-07-12 20:16:42.021612 | orchestrator | 2025-07-12 20:16:42 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:16:42.023306 | orchestrator | 2025-07-12 20:16:42 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:16:42.027534 | orchestrator | 2025-07-12 20:16:42 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:16:42.027942 | orchestrator | 2025-07-12 20:16:42 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED
2025-07-12 20:16:42.027958 | orchestrator | 2025-07-12 20:16:42 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:16:45.088728 | orchestrator | 2025-07-12 20:16:45 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED
2025-07-12 20:16:45.093136 | orchestrator | 2025-07-12 20:16:45 | INFO  | Task 8537a98d-1c81-4bf3-9c25-ee024f9dc643 is in state STARTED
2025-07-12 20:16:45.093923 | orchestrator | 2025-07-12 20:16:45 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:16:45.094901 | orchestrator | 2025-07-12 20:16:45 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:16:45.095677 | orchestrator | 2025-07-12 20:16:45 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:16:45.098376 | orchestrator | 2025-07-12 20:16:45 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED
2025-07-12 20:16:45.098408 | orchestrator | 2025-07-12 20:16:45 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:16:48.136778 | orchestrator | 2025-07-12 20:16:48 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED
2025-07-12 20:16:48.136903 | orchestrator | 2025-07-12 20:16:48 | INFO  | Task 8537a98d-1c81-4bf3-9c25-ee024f9dc643 is in state SUCCESS
2025-07-12 20:16:48.138190 | orchestrator |
2025-07-12 20:16:48.138235 | orchestrator |
2025-07-12 20:16:48.138255 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:16:48.138267 | orchestrator |
2025-07-12 20:16:48.138278 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:16:48.138290 | orchestrator | Saturday 12 July 2025 20:16:14 +0000 (0:00:00.287) 0:00:00.287 *********
2025-07-12 20:16:48.138301 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:16:48.138313 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:16:48.138324 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:16:48.138334 | orchestrator |
2025-07-12 20:16:48.138345 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:16:48.138372 | orchestrator | Saturday 12 July 2025 20:16:15 +0000 (0:00:00.427) 0:00:00.715 *********
2025-07-12 20:16:48.138394 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-07-12 20:16:48.138405 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-07-12 20:16:48.138416 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-07-12 20:16:48.138427 | orchestrator |
2025-07-12 20:16:48.138437 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-07-12 20:16:48.138448 | orchestrator |
2025-07-12 20:16:48.138459 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-07-12 20:16:48.138470 | orchestrator | Saturday 12 July 2025 20:16:16 +0000 (0:00:00.824) 0:00:01.539 *********
2025-07-12 20:16:48.138481 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:16:48.138493 | orchestrator |
2025-07-12 20:16:48.138503 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-07-12 20:16:48.138514 | orchestrator | Saturday 12 July 2025 20:16:16 +0000 (0:00:00.949) 0:00:02.488 *********
2025-07-12 20:16:48.138525 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-07-12 20:16:48.138536 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-07-12 20:16:48.138547 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-07-12 20:16:48.138557 | orchestrator |
2025-07-12 20:16:48.138568 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-07-12 20:16:48.138579 | orchestrator | Saturday 12 July 2025 20:16:17 +0000 (0:00:00.806) 0:00:03.295 *********
2025-07-12 20:16:48.138590 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-07-12 20:16:48.138668 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-07-12 20:16:48.138682 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-07-12 20:16:48.138694 | orchestrator |
2025-07-12 20:16:48.138704 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-07-12 20:16:48.138715 | orchestrator | Saturday 12 July 2025 20:16:20 +0000 (0:00:02.822) 0:00:06.118 *********
2025-07-12 20:16:48.138726 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:16:48.138737 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:16:48.138748 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:16:48.138758 | orchestrator |
2025-07-12 20:16:48.138769 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-07-12 20:16:48.138780 | orchestrator | Saturday 12 July 2025 20:16:23 +0000 (0:00:03.232) 0:00:09.350 *********
2025-07-12 20:16:48.138790 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:16:48.138801 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:16:48.138812 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:16:48.138822 | orchestrator |
2025-07-12 20:16:48.138833 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:16:48.138844 | orchestrator | testbed-node-0 : ok=7
 changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:16:48.138856 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:16:48.138867 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:16:48.138878 | orchestrator | 2025-07-12 20:16:48.138889 | orchestrator | 2025-07-12 20:16:48.138899 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:16:48.138910 | orchestrator | Saturday 12 July 2025 20:16:31 +0000 (0:00:07.605) 0:00:16.956 ********* 2025-07-12 20:16:48.138921 | orchestrator | =============================================================================== 2025-07-12 20:16:48.138931 | orchestrator | memcached : Restart memcached container --------------------------------- 7.61s 2025-07-12 20:16:48.138942 | orchestrator | memcached : Check memcached container ----------------------------------- 3.23s 2025-07-12 20:16:48.138953 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.82s 2025-07-12 20:16:48.138983 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.95s 2025-07-12 20:16:48.138994 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s 2025-07-12 20:16:48.139005 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.81s 2025-07-12 20:16:48.139016 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.43s 2025-07-12 20:16:48.139026 | orchestrator | 2025-07-12 20:16:48.139037 | orchestrator | 2025-07-12 20:16:48.139048 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:16:48.139058 | orchestrator | 2025-07-12 20:16:48.139069 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-07-12 20:16:48.139080 | orchestrator | Saturday 12 July 2025 20:16:14 +0000 (0:00:00.327) 0:00:00.327 ********* 2025-07-12 20:16:48.139090 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:16:48.139101 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:16:48.139112 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:16:48.139122 | orchestrator | 2025-07-12 20:16:48.139133 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 20:16:48.139158 | orchestrator | Saturday 12 July 2025 20:16:14 +0000 (0:00:00.439) 0:00:00.766 ********* 2025-07-12 20:16:48.139176 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2025-07-12 20:16:48.139187 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2025-07-12 20:16:48.139198 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True) 2025-07-12 20:16:48.139209 | orchestrator | 2025-07-12 20:16:48.139220 | orchestrator | PLAY [Apply role redis] ******************************************************** 2025-07-12 20:16:48.139239 | orchestrator | 2025-07-12 20:16:48.139250 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-07-12 20:16:48.139261 | orchestrator | Saturday 12 July 2025 20:16:15 +0000 (0:00:00.614) 0:00:01.381 ********* 2025-07-12 20:16:48.139272 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:16:48.139283 | orchestrator | 2025-07-12 20:16:48.139294 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-07-12 20:16:48.139305 | orchestrator | Saturday 12 July 2025 20:16:16 +0000 (0:00:00.840) 0:00:02.221 ********* 2025-07-12 20:16:48.139319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139419 | orchestrator | 2025-07-12 20:16:48.139431 | orchestrator | TASK 
[redis : Copying over default config.json files] ************************** 2025-07-12 20:16:48.139443 | orchestrator | Saturday 12 July 2025 20:16:17 +0000 (0:00:01.418) 0:00:03.640 ********* 2025-07-12 20:16:48.139455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 
2025-07-12 20:16:48.139491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139548 | orchestrator | 2025-07-12 20:16:48.139559 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-07-12 20:16:48.139570 | orchestrator | Saturday 12 July 2025 20:16:20 +0000 (0:00:03.032) 0:00:06.673 ********* 2025-07-12 20:16:48.139582 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139606 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': 
['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 
'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139670 | orchestrator | 2025-07-12 20:16:48.139681 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-07-12 20:16:48.139693 | orchestrator | Saturday 12 July 2025 20:16:24 +0000 (0:00:04.193) 0:00:10.866 ********* 2025-07-12 20:16:48.139711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:7.0.15.20250711', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-07-12 20:16:48.139802 | orchestrator | 2025-07-12 20:16:48.139814 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-07-12 20:16:48.139825 | orchestrator | Saturday 12 July 2025 20:16:26 +0000 (0:00:02.142) 0:00:13.009 ********* 2025-07-12 20:16:48.139837 | orchestrator | 2025-07-12 20:16:48.139848 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-07-12 20:16:48.139860 | orchestrator | Saturday 12 July 2025 20:16:27 +0000 (0:00:00.169) 0:00:13.179 ********* 2025-07-12 20:16:48.139871 | orchestrator | 2025-07-12 20:16:48.139882 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-07-12 20:16:48.139893 | orchestrator | Saturday 12 July 2025 20:16:27 +0000 (0:00:00.076) 0:00:13.255 ********* 2025-07-12 20:16:48.139905 | orchestrator | 2025-07-12 20:16:48.139916 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2025-07-12 20:16:48.139927 | orchestrator | Saturday 12 July 2025 
20:16:27 +0000 (0:00:00.184) 0:00:13.439 ********* 2025-07-12 20:16:48.139939 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:16:48.139950 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:16:48.139962 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:16:48.139988 | orchestrator | 2025-07-12 20:16:48.140000 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2025-07-12 20:16:48.140011 | orchestrator | Saturday 12 July 2025 20:16:37 +0000 (0:00:10.049) 0:00:23.493 ********* 2025-07-12 20:16:48.140023 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:16:48.140034 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:16:48.140046 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:16:48.140057 | orchestrator | 2025-07-12 20:16:48.140069 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:16:48.140081 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:16:48.140092 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:16:48.140104 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:16:48.140115 | orchestrator | 2025-07-12 20:16:48.140127 | orchestrator | 2025-07-12 20:16:48.140138 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:16:48.140150 | orchestrator | Saturday 12 July 2025 20:16:45 +0000 (0:00:08.320) 0:00:31.813 ********* 2025-07-12 20:16:48.140161 | orchestrator | =============================================================================== 2025-07-12 20:16:48.140173 | orchestrator | redis : Restart redis container ---------------------------------------- 10.05s 2025-07-12 20:16:48.140184 | orchestrator | redis : Restart redis-sentinel container 
-------------------------------- 8.32s 2025-07-12 20:16:48.140196 | orchestrator | redis : Copying over redis config files --------------------------------- 4.19s 2025-07-12 20:16:48.140214 | orchestrator | redis : Copying over default config.json files -------------------------- 3.03s 2025-07-12 20:16:48.140226 | orchestrator | redis : Check redis containers ------------------------------------------ 2.14s 2025-07-12 20:16:48.140237 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.42s 2025-07-12 20:16:48.140248 | orchestrator | redis : include_tasks --------------------------------------------------- 0.84s 2025-07-12 20:16:48.140260 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s 2025-07-12 20:16:48.140271 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.44s 2025-07-12 20:16:48.140283 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.43s 2025-07-12 20:16:48.140380 | orchestrator | 2025-07-12 20:16:48 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:16:48.141144 | orchestrator | 2025-07-12 20:16:48 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:16:48.143934 | orchestrator | 2025-07-12 20:16:48 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:16:48.145819 | orchestrator | 2025-07-12 20:16:48 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED 2025-07-12 20:16:48.145849 | orchestrator | 2025-07-12 20:16:48 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:51.179461 | orchestrator | 2025-07-12 20:16:51 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED 2025-07-12 20:16:51.179514 | orchestrator | 2025-07-12 20:16:51 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:16:51.179520 | 
orchestrator | 2025-07-12 20:16:51 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:16:51.179526 | orchestrator | 2025-07-12 20:16:51 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:16:51.179530 | orchestrator | 2025-07-12 20:16:51 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED 2025-07-12 20:16:51.179534 | orchestrator | 2025-07-12 20:16:51 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:54.223141 | orchestrator | 2025-07-12 20:16:54 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED 2025-07-12 20:16:54.223831 | orchestrator | 2025-07-12 20:16:54 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:16:54.224726 | orchestrator | 2025-07-12 20:16:54 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:16:54.227121 | orchestrator | 2025-07-12 20:16:54 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:16:54.231044 | orchestrator | 2025-07-12 20:16:54 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED 2025-07-12 20:16:54.231101 | orchestrator | 2025-07-12 20:16:54 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:16:57.265909 | orchestrator | 2025-07-12 20:16:57 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED 2025-07-12 20:16:57.269088 | orchestrator | 2025-07-12 20:16:57 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:16:57.270780 | orchestrator | 2025-07-12 20:16:57 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED 2025-07-12 20:16:57.271409 | orchestrator | 2025-07-12 20:16:57 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:16:57.273402 | orchestrator | 2025-07-12 20:16:57 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED 2025-07-12 20:16:57.273426 | 
orchestrator | 2025-07-12 20:16:57 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:17:00.348872 | orchestrator | 2025-07-12 20:17:00 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED
2025-07-12 20:17:00.356190 | orchestrator | 2025-07-12 20:17:00 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:17:00.356744 | orchestrator | 2025-07-12 20:17:00 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:17:00.357128 | orchestrator | 2025-07-12 20:17:00 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:17:00.368117 | orchestrator | 2025-07-12 20:17:00 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED
2025-07-12 20:17:00.368199 | orchestrator | 2025-07-12 20:17:00 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:17:03.435602 | orchestrator | 2025-07-12 20:17:03 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED
2025-07-12 20:17:03.435997 | orchestrator | 2025-07-12 20:17:03 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:17:03.437037 | orchestrator | 2025-07-12 20:17:03 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:17:03.437325 | orchestrator | 2025-07-12 20:17:03 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:17:03.440105 | orchestrator | 2025-07-12 20:17:03 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED
2025-07-12 20:17:03.440134 | orchestrator | 2025-07-12 20:17:03 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:17:06.487216 | orchestrator | 2025-07-12 20:17:06 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED
2025-07-12 20:17:06.487558 | orchestrator | 2025-07-12 20:17:06 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:17:06.488346 | orchestrator | 2025-07-12 20:17:06 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:17:06.493978 | orchestrator | 2025-07-12 20:17:06 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:17:06.496052 | orchestrator | 2025-07-12 20:17:06 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED
2025-07-12 20:17:06.496069 | orchestrator | 2025-07-12 20:17:06 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:17:09.530378 | orchestrator | 2025-07-12 20:17:09 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED
2025-07-12 20:17:09.535390 | orchestrator | 2025-07-12 20:17:09 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:17:09.536038 | orchestrator | 2025-07-12 20:17:09 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:17:09.537062 | orchestrator | 2025-07-12 20:17:09 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:17:09.538461 | orchestrator | 2025-07-12 20:17:09 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED
2025-07-12 20:17:09.538509 | orchestrator | 2025-07-12 20:17:09 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:17:12.583087 | orchestrator | 2025-07-12 20:17:12 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED
2025-07-12 20:17:12.583304 | orchestrator | 2025-07-12 20:17:12 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:17:12.584071 | orchestrator | 2025-07-12 20:17:12 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:17:12.584548 | orchestrator | 2025-07-12 20:17:12 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:17:12.585372 | orchestrator | 2025-07-12 20:17:12 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED
2025-07-12 20:17:12.585405 | orchestrator | 2025-07-12 20:17:12 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:17:15.612013 | orchestrator | 2025-07-12 20:17:15 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED
2025-07-12 20:17:15.613740 | orchestrator | 2025-07-12 20:17:15 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:17:15.617488 | orchestrator | 2025-07-12 20:17:15 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state STARTED
2025-07-12 20:17:15.620633 | orchestrator | 2025-07-12 20:17:15 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:17:15.621450 | orchestrator | 2025-07-12 20:17:15 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED
2025-07-12 20:17:15.621722 | orchestrator | 2025-07-12 20:17:15 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:17:18.652857 | orchestrator | 2025-07-12 20:17:18 | INFO  | Task a142c0b5-9cc0-4a56-a9f9-0e23fc8a46e7 is in state STARTED
2025-07-12 20:17:18.653117 | orchestrator | 2025-07-12 20:17:18 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED
2025-07-12 20:17:18.653721 | orchestrator | 2025-07-12 20:17:18 | INFO  | Task 388bdfd7-cfb2-4eca-85ad-193f251291ce is in state STARTED
2025-07-12 20:17:18.654407 | orchestrator | 2025-07-12 20:17:18 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:17:18.657663 | orchestrator |
2025-07-12 20:17:18.657711 | orchestrator | 2025-07-12 20:17:18 | INFO  | Task 2f631813-6e31-4665-83ab-031fe4ef8c80 is in state SUCCESS
2025-07-12 20:17:18.663068 | orchestrator |
2025-07-12 20:17:18.663123 | orchestrator | PLAY [Prepare all k3s nodes] ***************************************************
2025-07-12 20:17:18.663141 | orchestrator |
2025-07-12 20:17:18.663152 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] ***
2025-07-12 20:17:18.663164 | orchestrator | Saturday 12 July 2025 20:13:17 +0000
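The "Task … is in state STARTED / Wait 1 second(s) until the next check" lines above are the deployment driver polling its background tasks until they report SUCCESS. A minimal sketch of that polling pattern, assuming nothing about the real client API (`get_state` is a hypothetical stand-in for whatever backend the watcher queries):

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=300.0):
    """Poll task states until every task reports SUCCESS, or time out.

    get_state: callable mapping a task id to a state string -- a stand-in
    for the real task backend, not an actual client function.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        for tid in sorted(pending):
            state = get_state(tid)
            print(f"Task {tid} is in state {state}")
            if state == "SUCCESS":
                pending.discard(tid)
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still pending: {sorted(pending)}")
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return True
```

Each round re-checks only the still-pending tasks, which is why finished tasks (like `2f631813…` above, once it hits SUCCESS) drop out of later polling rounds.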
(0:00:00.212) 0:00:00.212 *********
2025-07-12 20:17:18.663175 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:17:18.663187 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:17:18.663198 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:17:18.663209 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:17:18.663219 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:17:18.663230 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:17:18.663241 | orchestrator |
2025-07-12 20:17:18.663252 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] **************************
2025-07-12 20:17:18.663263 | orchestrator | Saturday 12 July 2025 20:13:18 +0000 (0:00:01.005) 0:00:01.217 *********
2025-07-12 20:17:18.663274 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:17:18.663285 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:17:18.663296 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:17:18.663307 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:17:18.663318 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:17:18.663328 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:17:18.663339 | orchestrator |
2025-07-12 20:17:18.663350 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ******************************
2025-07-12 20:17:18.663361 | orchestrator | Saturday 12 July 2025 20:13:19 +0000 (0:00:01.052) 0:00:02.137 *********
2025-07-12 20:17:18.663372 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:17:18.663382 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:17:18.663393 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:17:18.663404 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:17:18.663415 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:17:18.663426 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:17:18.663437 | orchestrator |
2025-07-12 20:17:18.663448 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] *************************************
2025-07-12 20:17:18.663487 | orchestrator | Saturday 12 July 2025 20:13:20 +0000 (0:00:01.052) 0:00:03.189 *********
2025-07-12 20:17:18.663499 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:17:18.663510 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:17:18.663520 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:17:18.663531 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:17:18.663541 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:17:18.663552 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:17:18.663562 | orchestrator |
2025-07-12 20:17:18.663573 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] *************************************
2025-07-12 20:17:18.663584 | orchestrator | Saturday 12 July 2025 20:13:22 +0000 (0:00:02.614) 0:00:05.804 *********
2025-07-12 20:17:18.663594 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:17:18.663605 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:17:18.663617 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:17:18.663629 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:17:18.663641 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:17:18.663653 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:17:18.663665 | orchestrator |
2025-07-12 20:17:18.663677 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] **************************
2025-07-12 20:17:18.663689 | orchestrator | Saturday 12 July 2025 20:13:24 +0000 (0:00:01.748) 0:00:07.552 *********
2025-07-12 20:17:18.663714 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:17:18.663726 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:17:18.663738 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:17:18.663750 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:17:18.663761 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:17:18.663773 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:17:18.663785 | orchestrator |
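The three "Enable …" tasks above set kernel networking parameters, typically via Ansible's `ansible.posix.sysctl` module. As a sketch, the equivalent settings rendered as a sysctl.d-style fragment (the key names are standard kernel parameters; the exact values and file layout the role uses are assumptions, in particular `accept_ra = 2`, which is the usual choice when RAs must still be accepted while forwarding is enabled):

```python
# Render the sysctl keys the forwarding tasks above enable into a
# sysctl.d-style fragment. This is an illustration of the settings,
# not the role's actual implementation.
FORWARDING_PARAMS = {
    "net.ipv4.ip_forward": 1,           # Enable IPv4 forwarding
    "net.ipv6.conf.all.forwarding": 1,  # Enable IPv6 forwarding
    "net.ipv6.conf.all.accept_ra": 2,   # Accept RAs while forwarding (assumed value)
}

def render_sysctl(params):
    """Format a dict of sysctl keys as 'key = value' lines."""
    return "".join(f"{key} = {value}\n" for key, value in params.items())
```

Such a fragment would normally land under `/etc/sysctl.d/` and be applied with `sysctl --system`, which the sysctl module handles for you.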
2025-07-12 20:17:18.663796 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] *******************
2025-07-12 20:17:18.663808 | orchestrator | Saturday 12 July 2025 20:13:25 +0000 (0:00:01.395) 0:00:08.948 *********
2025-07-12 20:17:18.663820 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:17:18.663833 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:17:18.663845 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:17:18.663857 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:17:18.663868 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:17:18.663880 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:17:18.663891 | orchestrator |
2025-07-12 20:17:18.663903 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ******************************************
2025-07-12 20:17:18.663939 | orchestrator | Saturday 12 July 2025 20:13:26 +0000 (0:00:01.139) 0:00:10.088 *********
2025-07-12 20:17:18.663954 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:17:18.663965 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:17:18.663977 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:17:18.663989 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:17:18.664000 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:17:18.664010 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:17:18.664020 | orchestrator |
2025-07-12 20:17:18.664031 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] **************
2025-07-12 20:17:18.664042 | orchestrator | Saturday 12 July 2025 20:13:27 +0000 (0:00:01.013) 0:00:11.102 *********
2025-07-12 20:17:18.664052 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 20:17:18.664063 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 20:17:18.664073 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:17:18.664084 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 20:17:18.664094 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 20:17:18.664105 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:17:18.664116 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 20:17:18.664136 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 20:17:18.664147 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:17:18.664158 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 20:17:18.664182 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 20:17:18.664193 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:17:18.664204 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 20:17:18.664214 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 20:17:18.664225 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:17:18.664236 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)
2025-07-12 20:17:18.664246 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)
2025-07-12 20:17:18.664257 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:17:18.664268 | orchestrator |
2025-07-12 20:17:18.664278 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] *********************
2025-07-12 20:17:18.664289 | orchestrator | Saturday 12 July 2025 20:13:29 +0000 (0:00:01.197) 0:00:12.299 *********
2025-07-12 20:17:18.664300 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:17:18.664310 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:17:18.664321 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:17:18.664331 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:17:18.664342 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:17:18.664352 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:17:18.664363 | orchestrator |
2025-07-12 20:17:18.664373 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] ***
2025-07-12 20:17:18.664385 | orchestrator | Saturday 12 July 2025 20:13:31 +0000 (0:00:02.116) 0:00:14.416 *********
2025-07-12 20:17:18.664396 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:17:18.664406 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:17:18.664417 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:17:18.664427 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:17:18.664438 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:17:18.664448 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:17:18.664459 | orchestrator |
2025-07-12 20:17:18.664470 | orchestrator | TASK [k3s_download : Download k3s binary x64] **********************************
2025-07-12 20:17:18.664481 | orchestrator | Saturday 12 July 2025 20:13:32 +0000 (0:00:00.791) 0:00:15.207 *********
2025-07-12 20:17:18.664491 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:17:18.664502 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:17:18.664512 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:17:18.664523 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:17:18.664533 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:17:18.664543 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:17:18.664554 | orchestrator |
2025-07-12 20:17:18.664565 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ********************************
2025-07-12 20:17:18.664575 | orchestrator | Saturday 12 July 2025 20:13:37 +0000 (0:00:05.785) 0:00:20.993 *********
2025-07-12 20:17:18.664586 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:17:18.664596 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:17:18.664606 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:17:18.664617 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:17:18.664627 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:17:18.664638 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:17:18.664648 | orchestrator |
2025-07-12 20:17:18.664659 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ********************************
2025-07-12 20:17:18.664675 | orchestrator | Saturday 12 July 2025 20:13:39 +0000 (0:00:01.596) 0:00:22.589 *********
2025-07-12 20:17:18.664686 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:17:18.664697 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:17:18.664707 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:17:18.664724 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:17:18.664735 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:17:18.664746 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:17:18.664756 | orchestrator |
2025-07-12 20:17:18.664767 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] ***
2025-07-12 20:17:18.664779 | orchestrator | Saturday 12 July 2025 20:13:41 +0000 (0:00:02.341) 0:00:24.930 *********
2025-07-12 20:17:18.664790 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:17:18.664801 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:17:18.664811 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:17:18.664822 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:17:18.664832 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:17:18.664843 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:17:18.664853 | orchestrator |
2025-07-12 20:17:18.664864 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] ***************
2025-07-12 20:17:18.664875 | orchestrator | Saturday 12 July 2025 20:13:43 +0000 (0:00:01.364) 0:00:26.294 *********
2025-07-12 20:17:18.664885 | orchestrator | changed: [testbed-node-4] => (item=rancher)
2025-07-12 20:17:18.664896 | orchestrator | changed: [testbed-node-3] => (item=rancher)
2025-07-12 20:17:18.664906 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s)
2025-07-12 20:17:18.664938 | orchestrator | changed: [testbed-node-5] => (item=rancher)
2025-07-12 20:17:18.664951 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s)
2025-07-12 20:17:18.664961 | orchestrator | changed: [testbed-node-0] => (item=rancher)
2025-07-12 20:17:18.664972 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s)
2025-07-12 20:17:18.664982 | orchestrator | changed: [testbed-node-1] => (item=rancher)
2025-07-12 20:17:18.664993 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s)
2025-07-12 20:17:18.665003 | orchestrator | changed: [testbed-node-2] => (item=rancher)
2025-07-12 20:17:18.665014 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s)
2025-07-12 20:17:18.665024 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s)
2025-07-12 20:17:18.665035 | orchestrator |
2025-07-12 20:17:18.665045 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
2025-07-12 20:17:18.665056 | orchestrator | Saturday 12 July 2025 20:13:45 +0000 (0:00:02.811) 0:00:29.106 *********
2025-07-12 20:17:18.665066 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:17:18.665077 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:17:18.665087 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:17:18.665098 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:17:18.665108 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:17:18.665119 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:17:18.665129 | orchestrator |
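The `registries.yaml` written above is how k3s configures containerd registry mirrors. A sketch of producing such a document with a hand-rolled emitter so it stays stdlib-only (the registry name and endpoint below are placeholders, not values from this job; real playbooks would template the file):

```python
def render_registries(mirrors):
    """Emit a minimal /etc/rancher/k3s/registries.yaml body.

    mirrors: {registry_name: [endpoint_url, ...]}. The nesting mirrors
    k3s's documented 'mirrors:' / 'endpoints:' layout.
    """
    lines = ["mirrors:"]
    for name, endpoints in mirrors.items():
        lines.append(f"  {name}:")
        lines.append("    endpoints:")
        for endpoint in endpoints:
            lines.append(f'      - "{endpoint}"')
    return "\n".join(lines) + "\n"
```

k3s reads this file at startup, which is why the role writes it before the k3s services are brought up in the next play.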
2025-07-12 20:17:18.665146 | orchestrator | PLAY [Deploy k3s master nodes] *************************************************
2025-07-12 20:17:18.665157 | orchestrator |
2025-07-12 20:17:18.665168 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
2025-07-12 20:17:18.665178 | orchestrator | Saturday 12 July 2025 20:13:48 +0000 (0:00:02.334) 0:00:31.441 *********
2025-07-12 20:17:18.665189 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:17:18.665199 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:17:18.665210 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:17:18.665220 | orchestrator |
2025-07-12 20:17:18.665238 | orchestrator | TASK [k3s_server : Stop k3s-init] **********************************************
2025-07-12 20:17:18.665256 | orchestrator | Saturday 12 July 2025 20:13:50 +0000 (0:00:02.379) 0:00:33.820 *********
2025-07-12 20:17:18.665274 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:17:18.665292 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:17:18.665310 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:17:18.665328 | orchestrator |
2025-07-12 20:17:18.665347 | orchestrator | TASK [k3s_server : Stop k3s] ***************************************************
2025-07-12 20:17:18.665365 | orchestrator | Saturday 12 July 2025 20:13:52 +0000 (0:00:01.479) 0:00:35.300 *********
2025-07-12 20:17:18.665385 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:17:18.665413 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:17:18.665429 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:17:18.665440 | orchestrator |
2025-07-12 20:17:18.665451 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] ****************************
2025-07-12 20:17:18.665462 | orchestrator | Saturday 12 July 2025 20:13:53 +0000 (0:00:01.774) 0:00:37.074 *********
2025-07-12 20:17:18.665472 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:17:18.665483 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:17:18.665494 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:17:18.665504 | orchestrator |
2025-07-12 20:17:18.665515 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
2025-07-12 20:17:18.665525 | orchestrator | Saturday 12 July 2025 20:13:55 +0000 (0:00:01.122) 0:00:38.197 *********
2025-07-12 20:17:18.665536 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:17:18.665546 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:17:18.665557 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:17:18.665567 | orchestrator |
2025-07-12 20:17:18.665578 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
2025-07-12 20:17:18.665588 | orchestrator | Saturday 12 July 2025 20:13:55 +0000 (0:00:00.469) 0:00:38.666 *********
2025-07-12 20:17:18.665599 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:17:18.665609 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:17:18.665620 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:17:18.665630 | orchestrator |
2025-07-12 20:17:18.665641 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] **************************
2025-07-12 20:17:18.665651 | orchestrator | Saturday 12 July 2025 20:13:56 +0000 (0:00:00.877) 0:00:39.544 *********
2025-07-12 20:17:18.665662 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:17:18.665673 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:17:18.665683 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:17:18.665693 | orchestrator |
2025-07-12 20:17:18.665704 | orchestrator | TASK [k3s_server : Deploy vip manifest] ****************************************
2025-07-12 20:17:18.665714 | orchestrator | Saturday 12 July 2025 20:13:58 +0000 (0:00:01.794) 0:00:41.338 *********
2025-07-12 20:17:18.665732 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:17:18.665743 | orchestrator |
2025-07-12 20:17:18.665754 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
2025-07-12 20:17:18.665764 | orchestrator | Saturday 12 July 2025 20:13:58 +0000 (0:00:00.759) 0:00:42.098 *********
2025-07-12 20:17:18.665775 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:17:18.665785 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:17:18.665795 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:17:18.665806 | orchestrator |
2025-07-12 20:17:18.665817 | orchestrator | TASK [k3s_server : Create manifests directory on first master] *****************
2025-07-12 20:17:18.665827 | orchestrator | Saturday 12 July 2025 20:14:01 +0000 (0:00:02.846) 0:00:44.944 *********
2025-07-12 20:17:18.665838 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:17:18.665849 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:17:18.665859 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:17:18.665869 | orchestrator |
2025-07-12 20:17:18.665880 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] *****************
2025-07-12 20:17:18.665891 | orchestrator | Saturday 12 July 2025 20:14:02 +0000 (0:00:01.083) 0:00:46.027 *********
2025-07-12 20:17:18.665901 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:17:18.665912 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:17:18.665974 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:17:18.665985 | orchestrator |
2025-07-12 20:17:18.665996 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] **************************
2025-07-12 20:17:18.666007 | orchestrator | Saturday 12 July 2025 20:14:04 +0000 (0:00:01.158) 0:00:47.185 *********
2025-07-12 20:17:18.666065 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:17:18.666082 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:17:18.666101 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:17:18.666131 | orchestrator |
2025-07-12 20:17:18.666149 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************
2025-07-12 20:17:18.666167 | orchestrator | Saturday 12 July 2025 20:14:05 +0000 (0:00:01.492) 0:00:48.678 *********
2025-07-12 20:17:18.666186 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:17:18.666204 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:17:18.666224 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:17:18.666242 | orchestrator |
2025-07-12 20:17:18.666262 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-07-12 20:17:18.666274 | orchestrator | Saturday 12 July 2025 20:14:06 +0000 (0:00:00.663) 0:00:49.341 *********
2025-07-12 20:17:18.666284 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:17:18.666294 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:17:18.666305 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:17:18.666315 | orchestrator |
2025-07-12 20:17:18.666326 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-07-12 20:17:18.666336 | orchestrator | Saturday 12 July 2025 20:14:07 +0000 (0:00:00.890) 0:00:50.231 *********
2025-07-12 20:17:18.666348 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:17:18.666358 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:17:18.666368 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:17:18.666379 | orchestrator |
2025-07-12 20:17:18.666405 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-07-12 20:17:18.666421 | orchestrator | Saturday 12 July 2025 20:14:09 +0000 (0:00:02.428) 0:00:52.660 *********
2025-07-12 20:17:18.666436 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-07-12 20:17:18.666462 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-07-12 20:17:18.666481 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-07-12 20:17:18.666498 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-07-12 20:17:18.666515 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-07-12 20:17:18.666532 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-07-12 20:17:18.666549 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-07-12 20:17:18.666567 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-07-12 20:17:18.666584 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-07-12 20:17:18.666600 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-07-12 20:17:18.666615 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-07-12 20:17:18.666631 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-07-12 20:17:18.666647 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-07-12 20:17:18.666674 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-07-12 20:17:18.666694 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-07-12 20:17:18.666726 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:17:18.666744 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:17:18.666762 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:17:18.666781 | orchestrator |
2025-07-12 20:17:18.666798 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-07-12 20:17:18.666815 | orchestrator | Saturday 12 July 2025 20:15:04 +0000 (0:00:55.338) 0:01:47.998 *********
2025-07-12 20:17:18.666826 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:17:18.666836 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:17:18.666847 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:17:18.666857 | orchestrator |
2025-07-12 20:17:18.666868 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-07-12 20:17:18.666878 | orchestrator | Saturday 12 July 2025 20:15:05 +0000 (0:00:00.278) 0:01:48.277 *********
2025-07-12 20:17:18.666889 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:17:18.666900 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:17:18.666910 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:17:18.666949 | orchestrator |
2025-07-12 20:17:18.666961 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-07-12 20:17:18.666972 | orchestrator | Saturday 12 July 2025 20:15:06 +0000 (0:00:01.539) 0:01:49.816 *********
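The "Verify that all nodes actually joined" task above uses Ansible's until/retries/delay loop: it re-runs the check up to 20 times, printing the "N retries left" countdown, and here succeeded after four failed rounds (the cluster needed ~55 s to converge). The same pattern sketched outside Ansible, with an illustrative check function:

```python
import time

def retry_until(check, retries=20, delay=1.0, name="task"):
    """Re-run check() until it returns True, mimicking Ansible's
    until/retries/delay loop and its 'N retries left' countdown."""
    for attempt in range(retries):
        if check():
            return True
        remaining = retries - attempt - 1
        print(f"FAILED - RETRYING: {name} ({remaining} retries left).")
        if remaining == 0:
            break
        time.sleep(delay)
    return False
```

In the real task the check would be something like "kubectl get nodes reports all expected masters Ready"; here `check` is just a placeholder callable.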
2025-07-12 20:17:18.666982 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:17:18.666993 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:17:18.667003 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:17:18.667014 | orchestrator |
2025-07-12 20:17:18.667025 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-07-12 20:17:18.667035 | orchestrator | Saturday 12 July 2025 20:15:08 +0000 (0:00:01.407) 0:01:51.224 *********
2025-07-12 20:17:18.667046 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:17:18.667057 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:17:18.667067 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:17:18.667078 | orchestrator |
2025-07-12 20:17:18.667088 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-07-12 20:17:18.667099 | orchestrator | Saturday 12 July 2025 20:15:33 +0000 (0:00:25.400) 0:02:16.625 *********
2025-07-12 20:17:18.667110 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:17:18.667121 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:17:18.667132 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:17:18.667142 | orchestrator |
2025-07-12 20:17:18.667153 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-07-12 20:17:18.667164 | orchestrator | Saturday 12 July 2025 20:15:34 +0000 (0:00:00.877) 0:02:17.503 *********
2025-07-12 20:17:18.667175 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:17:18.667185 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:17:18.667196 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:17:18.667206 | orchestrator |
2025-07-12 20:17:18.667228 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-07-12 20:17:18.667253 | orchestrator | Saturday 12 July 2025 20:15:35 +0000 (0:00:01.316) 0:02:18.819 *********
2025-07-12 20:17:18.667276 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:17:18.667294 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:17:18.667312 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:17:18.667329 | orchestrator |
2025-07-12 20:17:18.667345 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-07-12 20:17:18.667362 | orchestrator | Saturday 12 July 2025 20:15:36 +0000 (0:00:00.905) 0:02:19.725 *********
2025-07-12 20:17:18.667377 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:17:18.667395 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:17:18.667413 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:17:18.667431 | orchestrator |
2025-07-12 20:17:18.667451 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-07-12 20:17:18.667470 | orchestrator | Saturday 12 July 2025 20:15:37 +0000 (0:00:00.803) 0:02:20.528 *********
2025-07-12 20:17:18.667504 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:17:18.667523 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:17:18.667540 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:17:18.667559 | orchestrator |
2025-07-12 20:17:18.667577 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-07-12 20:17:18.667597 | orchestrator | Saturday 12 July 2025 20:15:37 +0000 (0:00:00.414) 0:02:20.943 *********
2025-07-12 20:17:18.667616 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:17:18.667634 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:17:18.667646 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:17:18.667656 | orchestrator |
2025-07-12 20:17:18.667667 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-07-12 20:17:18.667678 | orchestrator | Saturday 12 July 2025 20:15:39 +0000 (0:00:01.233) 0:02:22.176 *********
2025-07-12 20:17:18.667688 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:17:18.667699 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:17:18.667709 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:17:18.667719 | orchestrator |
2025-07-12 20:17:18.667730 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-07-12 20:17:18.667741 | orchestrator | Saturday 12 July 2025 20:15:39 +0000 (0:00:00.761) 0:02:22.937 *********
2025-07-12 20:17:18.667751 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:17:18.667762 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:17:18.667772 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:17:18.667783 | orchestrator |
2025-07-12 20:17:18.667793 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-07-12 20:17:18.667804 | orchestrator | Saturday 12 July 2025 20:15:40 +0000 (0:00:01.044) 0:02:23.982 *********
2025-07-12 20:17:18.667814 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:17:18.667825 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:17:18.667835 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:17:18.667846 | orchestrator |
2025-07-12 20:17:18.667856 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-07-12 20:17:18.667867 | orchestrator | Saturday 12 July 2025 20:15:41 +0000 (0:00:00.851) 0:02:24.833 *********
2025-07-12 20:17:18.667877 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:17:18.667888 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:17:18.667898 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:17:18.667909 | orchestrator |
2025-07-12 20:17:18.667954 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-07-12 20:17:18.667967 | orchestrator | Saturday 12 July 2025 20:15:42 +0000 (0:00:00.578) 0:02:25.412 *********
2025-07-12 20:17:18.667978 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:17:18.667989 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:17:18.667999 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:17:18.668010 | orchestrator |
2025-07-12 20:17:18.668020 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-07-12 20:17:18.668031 | orchestrator | Saturday 12 July 2025 20:15:42 +0000 (0:00:00.289) 0:02:25.702 *********
2025-07-12 20:17:18.668042 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:17:18.668052 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:17:18.668063 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:17:18.668073 | orchestrator |
2025-07-12 20:17:18.668084 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-07-12 20:17:18.668094 | orchestrator | Saturday 12 July 2025 20:15:43 +0000 (0:00:00.665) 0:02:26.368 *********
2025-07-12 20:17:18.668105 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:17:18.668115 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:17:18.668126 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:17:18.668144 | orchestrator |
2025-07-12 20:17:18.668163 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-07-12 20:17:18.668181 | orchestrator | Saturday 12 July 2025 20:15:43 +0000 (0:00:00.614) 0:02:26.982 *********
2025-07-12 20:17:18.668198 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-07-12 20:17:18.668228 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-07-12 20:17:18.668246 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-07-12 20:17:18.668264 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-07-12 20:17:18.668281 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-07-12 20:17:18.668299 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-07-12 20:17:18.668316 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-07-12 20:17:18.668333 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-07-12 20:17:18.668351 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-07-12 20:17:18.668381 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-07-12 20:17:18.668400 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-07-12 20:17:18.668416 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-07-12 20:17:18.668434 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-07-12 20:17:18.668451 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-07-12 20:17:18.668469 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-07-12 20:17:18.668487 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-07-12 20:17:18.668505 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-07-12 20:17:18.668523 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-07-12 20:17:18.668542 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-07-12 20:17:18.668559 | orchestrator
| changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2025-07-12 20:17:18.668577 | orchestrator | 2025-07-12 20:17:18.668596 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2025-07-12 20:17:18.668615 | orchestrator | 2025-07-12 20:17:18.668633 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2025-07-12 20:17:18.668653 | orchestrator | Saturday 12 July 2025 20:15:47 +0000 (0:00:03.231) 0:02:30.213 ********* 2025-07-12 20:17:18.668671 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:17:18.668690 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:17:18.668705 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:17:18.668716 | orchestrator | 2025-07-12 20:17:18.668726 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2025-07-12 20:17:18.668737 | orchestrator | Saturday 12 July 2025 20:15:47 +0000 (0:00:00.334) 0:02:30.547 ********* 2025-07-12 20:17:18.668747 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:17:18.668758 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:17:18.668768 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:17:18.668778 | orchestrator | 2025-07-12 20:17:18.668789 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2025-07-12 20:17:18.668799 | orchestrator | Saturday 12 July 2025 20:15:48 +0000 (0:00:00.649) 0:02:31.197 ********* 2025-07-12 20:17:18.668810 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:17:18.668820 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:17:18.668831 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:17:18.668841 | orchestrator | 2025-07-12 20:17:18.668852 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2025-07-12 20:17:18.668884 | orchestrator | Saturday 12 July 2025 20:15:48 +0000 (0:00:00.549) 0:02:31.746 ********* 
orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
orchestrator | Saturday 12 July 2025 20:15:49 +0000 (0:00:00.469) 0:02:32.216 *********
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
orchestrator | Saturday 12 July 2025 20:15:49 +0000 (0:00:00.319) 0:02:32.535 *********
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
orchestrator | Saturday 12 July 2025 20:15:49 +0000 (0:00:00.525) 0:02:33.061 *********
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
orchestrator | Saturday 12 July 2025 20:15:50 +0000 (0:00:00.307) 0:02:33.368 *********
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
orchestrator | Saturday 12 July 2025 20:15:50 +0000 (0:00:00.674) 0:02:34.043 *********
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-5]
orchestrator | changed: [testbed-node-4]
orchestrator |
orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
orchestrator | Saturday 12 July 2025 20:15:52 +0000 (0:00:01.497) 0:02:35.540 *********
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
orchestrator | Saturday 12 July 2025 20:15:53 +0000 (0:00:01.548) 0:02:37.088 *********
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | PLAY [Prepare kubeconfig file] *************************************************
orchestrator |
orchestrator | TASK [Get home directory of operator user] *************************************
orchestrator | Saturday 12 July 2025 20:16:06 +0000 (0:00:12.567) 0:02:49.656 *********
orchestrator | ok: [testbed-manager]
orchestrator |
orchestrator | TASK [Create .kube directory] **************************************************
orchestrator | Saturday 12 July 2025 20:16:07 +0000 (0:00:00.781) 0:02:50.438 *********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [Get kubeconfig file] *****************************************************
orchestrator | Saturday 12 July 2025 20:16:07 +0000 (0:00:00.386) 0:02:50.825 *********
orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
orchestrator |
orchestrator | TASK [Write kubeconfig file] ***************************************************
orchestrator | Saturday 12 July 2025 20:16:08 +0000 (0:00:00.687) 0:02:51.512 *********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [Change server address in the kubeconfig] *********************************
orchestrator | Saturday 12 July 2025 20:16:09 +0000 (0:00:00.742) 0:02:52.254 *********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
orchestrator | Saturday 12 July 2025 20:16:09 +0000 (0:00:00.541) 0:02:52.796 *********
orchestrator | changed: [testbed-manager -> localhost]
orchestrator |
orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
orchestrator | Saturday 12 July 2025 20:16:11 +0000 (0:00:01.548) 0:02:54.345 *********
orchestrator | changed: [testbed-manager -> localhost]
orchestrator |
orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
orchestrator | Saturday 12 July 2025 20:16:11 +0000 (0:00:00.782) 0:02:55.127 *********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [Enable kubectl command line completion] **********************************
orchestrator | Saturday 12 July 2025 20:16:12 +0000 (0:00:00.365) 0:02:55.493 *********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | PLAY [Apply role kubectl] ******************************************************
orchestrator |
orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
orchestrator | Saturday 12 July 2025 20:16:12 +0000 (0:00:00.412) 0:02:55.906 *********
orchestrator | ok: [testbed-manager]
orchestrator |
orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
orchestrator | Saturday 12 July 2025 20:16:12 +0000 (0:00:00.131) 0:02:56.037 *********
orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
orchestrator |
orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
orchestrator | Saturday 12 July 2025 20:16:13 +0000 (0:00:00.192) 0:02:56.230 *********
orchestrator | ok: [testbed-manager]
orchestrator |
orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
orchestrator | Saturday 12 July 2025 20:16:14 +0000 (0:00:01.004) 0:02:57.234 *********
orchestrator | ok: [testbed-manager]
orchestrator |
orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
orchestrator | Saturday 12 July 2025 20:16:15 +0000 (0:00:01.688) 0:02:58.923 *********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
orchestrator | Saturday 12 July 2025 20:16:16 +0000 (0:00:00.856) 0:02:59.779 *********
orchestrator | ok: [testbed-manager]
orchestrator |
orchestrator | TASK [kubectl : Add repository Debian] *****************************************
orchestrator | Saturday 12 July 2025 20:16:17 +0000 (0:00:00.442) 0:03:00.222 *********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [kubectl : Install required packages] *************************************
orchestrator | Saturday 12 July 2025 20:16:23 +0000 (0:00:06.487) 0:03:06.710 *********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
orchestrator | Saturday 12 July 2025 20:16:37 +0000 (0:00:13.457) 0:03:20.167 *********
orchestrator | ok: [testbed-manager]
orchestrator |
orchestrator | PLAY [Run post actions on master nodes] ****************************************
orchestrator |
orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
orchestrator | Saturday 12 July 2025 20:16:37 +0000 (0:00:00.667) 0:03:20.835 *********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
orchestrator | Saturday 12 July 2025 20:16:38 +0000 (0:00:00.410) 0:03:21.245 *********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
orchestrator | Saturday 12 July 2025 20:16:38 +0000 (0:00:00.620) 0:03:21.866 *********
orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
orchestrator |
orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
orchestrator | Saturday 12 July 2025 20:16:39 +0000 (0:00:01.054) 0:03:22.920 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] **********************
orchestrator | Saturday 12 July 2025 20:16:40 +0000 (0:00:00.435) 0:03:23.356 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ********
orchestrator | Saturday 12 July 2025 20:16:40 +0000 (0:00:00.347) 0:03:23.703 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] *************
orchestrator | Saturday 12 July 2025 20:16:40 +0000 (0:00:00.235) 0:03:23.939 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] **************
orchestrator | Saturday 12 July 2025 20:16:41 +0000 (0:00:00.843) 0:03:24.782 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] **********************
orchestrator | Saturday 12 July 2025 20:16:41 +0000 (0:00:00.249) 0:03:25.032 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ******************
orchestrator | Saturday 12 July 2025 20:16:42 +0000 (0:00:00.261) 0:03:25.294 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
orchestrator | Saturday 12 July 2025 20:16:42 +0000 (0:00:00.277) 0:03:25.572 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Set architecture variable] *****************************
orchestrator | Saturday 12 July 2025 20:16:42 +0000 (0:00:00.291) 0:03:25.864 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
orchestrator | Saturday 12 July 2025 20:16:42 +0000 (0:00:00.247) 0:03:26.111 *********
orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)
orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] *************************
orchestrator | Saturday 12 July 2025 20:16:43 +0000 (0:00:00.438) 0:03:26.549 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
orchestrator | Saturday 12 July 2025 20:16:43 +0000 (0:00:00.222) 0:03:26.772 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
orchestrator | Saturday 12 July 2025 20:16:43 +0000 (0:00:00.232) 0:03:27.004 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
orchestrator | Saturday 12 July 2025 20:16:44 +0000 (0:00:00.239) 0:03:27.244 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
orchestrator | Saturday 12 July 2025 20:16:44 +0000 (0:00:00.221) 0:03:27.466 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
orchestrator | Saturday 12 July 2025 20:16:44 +0000 (0:00:00.298) 0:03:27.764 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
orchestrator | Saturday 12 July 2025 20:16:45 +0000 (0:00:01.135) 0:03:28.899 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
orchestrator | Saturday 12 July 2025 20:16:46 +0000 (0:00:00.235) 0:03:29.135 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
orchestrator | Saturday 12 July 2025 20:16:46 +0000 (0:00:00.231) 0:03:29.366 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Log result] ********************************************
orchestrator | Saturday 12 July 2025 20:16:46 +0000 (0:00:00.275) 0:03:29.641 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
orchestrator | Saturday 12 July 2025 20:16:46 +0000 (0:00:00.229) 0:03:29.871 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
orchestrator | Saturday 12 July 2025 20:16:46 +0000 (0:00:00.216) 0:03:30.087 *********
orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)
orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)
orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)
orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************
orchestrator | Saturday 12 July 2025 20:16:47 +0000 (0:00:00.446) 0:03:30.534 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ********************
orchestrator | Saturday 12 July 2025 20:16:47 +0000 (0:00:00.193) 0:03:30.728 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Apply BGP manifests] ***********************************
orchestrator | Saturday 12 July 2025 20:16:47 +0000 (0:00:00.194) 0:03:30.922 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] ***
orchestrator | Saturday 12 July 2025 20:16:47 +0000 (0:00:00.187) 0:03:31.109 *********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Test for BGP config resources] *************************
orchestrator | Saturday 12 July 2025 20:16:48 +0000 (0:00:00.186) 0:03:31.296 *********
orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)
orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io)
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [k3s_server_post : Deploy metallb pool] ***********************************
orchestrator | Saturday 12 July 2025 20:16:48 +0000 (0:00:00.443) 0:03:31.739 *********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator |
orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] ***************
orchestrator | Saturday 12 July 2025 20:16:49 +0000 (0:00:00.460) 0:03:32.199 *********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | PLAY [Apply role k9s] **********************************************************
orchestrator |
orchestrator | TASK [k9s : Gather variables for each operating system] ************************
orchestrator | Saturday 12 July 2025 20:16:49 +0000 (0:00:00.832) 0:03:33.031 *********
orchestrator | ok: [testbed-manager]
orchestrator |
orchestrator | TASK [k9s : Include distribution specific install tasks] ***********************
orchestrator | Saturday 12 July 2025 20:16:50 +0000 (0:00:00.140) 0:03:33.172 *********
orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager
orchestrator |
orchestrator | TASK [k9s : Install k9s packages] **********************************************
orchestrator | Saturday 12 July 2025 20:16:50 +0000 (0:00:00.191) 0:03:33.364 *********
orchestrator | changed: [testbed-manager]
orchestrator |
orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************
orchestrator |
orchestrator | TASK [Merge labels, annotations, and taints] ***********************************
orchestrator | Saturday 12 July 2025 20:16:55 +0000 (0:00:05.673) 0:03:39.037 *********
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [Manage labels] ***********************************************************
orchestrator | Saturday 12 July 2025 20:16:56 +0000 (0:00:00.666) 0:03:39.704 *********
orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
orchestrator |
orchestrator | TASK [Manage annotations] ******************************************************
orchestrator | Saturday
12 July 2025 20:17:14 +0000 (0:00:17.726) 0:03:57.430 ********* 2025-07-12 20:17:18.672355 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:17:18.672363 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:17:18.672371 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:17:18.672378 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:17:18.672386 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:17:18.672393 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:17:18.672401 | orchestrator | 2025-07-12 20:17:18.672408 | orchestrator | TASK [Manage taints] *********************************************************** 2025-07-12 20:17:18.672416 | orchestrator | Saturday 12 July 2025 20:17:14 +0000 (0:00:00.665) 0:03:58.096 ********* 2025-07-12 20:17:18.672424 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:17:18.672431 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:17:18.672439 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:17:18.672446 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:17:18.672454 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:17:18.672461 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:17:18.672469 | orchestrator | 2025-07-12 20:17:18.672481 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:17:18.672503 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:17:18.672519 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-07-12 20:17:18.672539 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-07-12 20:17:18.672548 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-07-12 20:17:18.672555 | orchestrator | testbed-node-3 : ok=19  changed=9  
unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-07-12 20:17:18.672563 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-07-12 20:17:18.672571 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-07-12 20:17:18.672578 | orchestrator | 2025-07-12 20:17:18.672586 | orchestrator | 2025-07-12 20:17:18.672594 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:17:18.672602 | orchestrator | Saturday 12 July 2025 20:17:15 +0000 (0:00:00.608) 0:03:58.705 ********* 2025-07-12 20:17:18.672609 | orchestrator | =============================================================================== 2025-07-12 20:17:18.672617 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 55.34s 2025-07-12 20:17:18.672625 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 25.40s 2025-07-12 20:17:18.672632 | orchestrator | Manage labels ---------------------------------------------------------- 17.73s 2025-07-12 20:17:18.672640 | orchestrator | kubectl : Install required packages ------------------------------------ 13.46s 2025-07-12 20:17:18.672648 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 12.57s 2025-07-12 20:17:18.672655 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.49s 2025-07-12 20:17:18.672663 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.79s 2025-07-12 20:17:18.672670 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.67s 2025-07-12 20:17:18.672678 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.23s 2025-07-12 20:17:18.672686 | 
orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.85s 2025-07-12 20:17:18.672698 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.81s 2025-07-12 20:17:18.672706 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.62s 2025-07-12 20:17:18.672713 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 2.43s 2025-07-12 20:17:18.672721 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 2.38s 2025-07-12 20:17:18.672728 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 2.34s 2025-07-12 20:17:18.672736 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 2.33s 2025-07-12 20:17:18.672743 | orchestrator | k3s_prereq : Add /usr/local/bin to sudo secure_path --------------------- 2.12s 2025-07-12 20:17:18.672751 | orchestrator | k3s_server : Create custom resolv.conf for k3s -------------------------- 1.79s 2025-07-12 20:17:18.672759 | orchestrator | k3s_server : Stop k3s --------------------------------------------------- 1.77s 2025-07-12 20:17:18.672766 | orchestrator | k3s_prereq : Enable IPv6 forwarding ------------------------------------- 1.75s 2025-07-12 20:17:18.672774 | orchestrator | 2025-07-12 20:17:18 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:17:18.672782 | orchestrator | 2025-07-12 20:17:18 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED 2025-07-12 20:17:18.672790 | orchestrator | 2025-07-12 20:17:18 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:17:21.707108 | orchestrator | 2025-07-12 20:17:21 | INFO  | Task a142c0b5-9cc0-4a56-a9f9-0e23fc8a46e7 is in state STARTED 2025-07-12 20:17:21.707414 | orchestrator | 2025-07-12 20:17:21 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in 
state STARTED 2025-07-12 20:17:21.708093 | orchestrator | 2025-07-12 20:17:21 | INFO  | Task 388bdfd7-cfb2-4eca-85ad-193f251291ce is in state STARTED 2025-07-12 20:17:21.709620 | orchestrator | 2025-07-12 20:17:21 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:17:21.710403 | orchestrator | 2025-07-12 20:17:21 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:17:21.711170 | orchestrator | 2025-07-12 20:17:21 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED 2025-07-12 20:17:21.711922 | orchestrator | 2025-07-12 20:17:21 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:17:24.748836 | orchestrator | 2025-07-12 20:17:24 | INFO  | Task a142c0b5-9cc0-4a56-a9f9-0e23fc8a46e7 is in state SUCCESS 2025-07-12 20:17:24.749120 | orchestrator | 2025-07-12 20:17:24 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED 2025-07-12 20:17:24.749873 | orchestrator | 2025-07-12 20:17:24 | INFO  | Task 388bdfd7-cfb2-4eca-85ad-193f251291ce is in state STARTED 2025-07-12 20:17:24.752168 | orchestrator | 2025-07-12 20:17:24 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:17:24.752807 | orchestrator | 2025-07-12 20:17:24 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:17:24.758189 | orchestrator | 2025-07-12 20:17:24 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED 2025-07-12 20:17:24.760689 | orchestrator | 2025-07-12 20:17:24 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:17:27.794711 | orchestrator | 2025-07-12 20:17:27 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED 2025-07-12 20:17:27.795169 | orchestrator | 2025-07-12 20:17:27 | INFO  | Task 388bdfd7-cfb2-4eca-85ad-193f251291ce is in state SUCCESS 2025-07-12 20:17:27.795686 | orchestrator | 2025-07-12 20:17:27 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state 
STARTED 2025-07-12 20:17:27.796363 | orchestrator | 2025-07-12 20:17:27 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:17:27.797298 | orchestrator | 2025-07-12 20:17:27 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED 2025-07-12 20:17:27.797327 | orchestrator | 2025-07-12 20:17:27 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:17:30.840144 | orchestrator | 2025-07-12 20:17:30 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED 2025-07-12 20:17:30.840919 | orchestrator | 2025-07-12 20:17:30 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:17:30.841112 | orchestrator | 2025-07-12 20:17:30 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:17:30.841762 | orchestrator | 2025-07-12 20:17:30 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED 2025-07-12 20:17:30.841804 | orchestrator | 2025-07-12 20:17:30 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:17:33.877865 | orchestrator | 2025-07-12 20:17:33 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state STARTED 2025-07-12 20:17:33.880126 | orchestrator | 2025-07-12 20:17:33 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:17:33.883095 | orchestrator | 2025-07-12 20:17:33 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:17:33.885612 | orchestrator | 2025-07-12 20:17:33 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED 2025-07-12 20:17:33.885674 | orchestrator | 2025-07-12 20:17:33 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:17:36.930366 | orchestrator | 2025-07-12 20:17:36 | INFO  | Task 9e389935-2696-4f50-a906-496c6fadd127 is in state SUCCESS 2025-07-12 20:17:36.931504 | orchestrator | 2025-07-12 20:17:36.931546 | orchestrator | 2025-07-12 20:17:36.931559 | orchestrator | PLAY [Copy kubeconfig to 
the configuration repository] ************************* 2025-07-12 20:17:36.931571 | orchestrator | 2025-07-12 20:17:36.931582 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-07-12 20:17:36.931593 | orchestrator | Saturday 12 July 2025 20:17:20 +0000 (0:00:00.175) 0:00:00.175 ********* 2025-07-12 20:17:36.931604 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-07-12 20:17:36.931615 | orchestrator | 2025-07-12 20:17:36.931626 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-07-12 20:17:36.931636 | orchestrator | Saturday 12 July 2025 20:17:20 +0000 (0:00:00.804) 0:00:00.979 ********* 2025-07-12 20:17:36.931647 | orchestrator | changed: [testbed-manager] 2025-07-12 20:17:36.931658 | orchestrator | 2025-07-12 20:17:36.931668 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-07-12 20:17:36.931679 | orchestrator | Saturday 12 July 2025 20:17:22 +0000 (0:00:01.259) 0:00:02.239 ********* 2025-07-12 20:17:36.931690 | orchestrator | changed: [testbed-manager] 2025-07-12 20:17:36.931700 | orchestrator | 2025-07-12 20:17:36.931711 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:17:36.931722 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:17:36.931757 | orchestrator | 2025-07-12 20:17:36.931769 | orchestrator | 2025-07-12 20:17:36.931779 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:17:36.931790 | orchestrator | Saturday 12 July 2025 20:17:22 +0000 (0:00:00.624) 0:00:02.863 ********* 2025-07-12 20:17:36.931800 | orchestrator | =============================================================================== 2025-07-12 20:17:36.931810 | orchestrator | Write kubeconfig file 
--------------------------------------------------- 1.26s 2025-07-12 20:17:36.931821 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.80s 2025-07-12 20:17:36.931831 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.62s 2025-07-12 20:17:36.931841 | orchestrator | 2025-07-12 20:17:36.931852 | orchestrator | 2025-07-12 20:17:36.931863 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-07-12 20:17:36.931874 | orchestrator | 2025-07-12 20:17:36.931884 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-07-12 20:17:36.931925 | orchestrator | Saturday 12 July 2025 20:17:20 +0000 (0:00:00.217) 0:00:00.217 ********* 2025-07-12 20:17:36.931936 | orchestrator | ok: [testbed-manager] 2025-07-12 20:17:36.931947 | orchestrator | 2025-07-12 20:17:36.931958 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-07-12 20:17:36.931968 | orchestrator | Saturday 12 July 2025 20:17:20 +0000 (0:00:00.581) 0:00:00.798 ********* 2025-07-12 20:17:36.931979 | orchestrator | ok: [testbed-manager] 2025-07-12 20:17:36.931989 | orchestrator | 2025-07-12 20:17:36.932000 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-07-12 20:17:36.932011 | orchestrator | Saturday 12 July 2025 20:17:21 +0000 (0:00:00.573) 0:00:01.372 ********* 2025-07-12 20:17:36.932021 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-07-12 20:17:36.932031 | orchestrator | 2025-07-12 20:17:36.932042 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-07-12 20:17:36.932052 | orchestrator | Saturday 12 July 2025 20:17:22 +0000 (0:00:00.740) 0:00:02.112 ********* 2025-07-12 20:17:36.932062 | orchestrator | changed: [testbed-manager] 2025-07-12 20:17:36.932096 | 
orchestrator | 2025-07-12 20:17:36.932109 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-07-12 20:17:36.932121 | orchestrator | Saturday 12 July 2025 20:17:23 +0000 (0:00:01.252) 0:00:03.365 ********* 2025-07-12 20:17:36.932134 | orchestrator | changed: [testbed-manager] 2025-07-12 20:17:36.932146 | orchestrator | 2025-07-12 20:17:36.932158 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-07-12 20:17:36.932170 | orchestrator | Saturday 12 July 2025 20:17:23 +0000 (0:00:00.509) 0:00:03.875 ********* 2025-07-12 20:17:36.932182 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-12 20:17:36.932195 | orchestrator | 2025-07-12 20:17:36.932208 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-07-12 20:17:36.932227 | orchestrator | Saturday 12 July 2025 20:17:25 +0000 (0:00:01.706) 0:00:05.581 ********* 2025-07-12 20:17:36.932245 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-12 20:17:36.932265 | orchestrator | 2025-07-12 20:17:36.932286 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-07-12 20:17:36.932307 | orchestrator | Saturday 12 July 2025 20:17:26 +0000 (0:00:01.301) 0:00:06.883 ********* 2025-07-12 20:17:36.932328 | orchestrator | ok: [testbed-manager] 2025-07-12 20:17:36.932341 | orchestrator | 2025-07-12 20:17:36.932354 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-07-12 20:17:36.932366 | orchestrator | Saturday 12 July 2025 20:17:27 +0000 (0:00:00.359) 0:00:07.242 ********* 2025-07-12 20:17:36.932378 | orchestrator | ok: [testbed-manager] 2025-07-12 20:17:36.932390 | orchestrator | 2025-07-12 20:17:36.932402 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:17:36.932429 | orchestrator | 
testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:17:36.932442 | orchestrator | 2025-07-12 20:17:36.932453 | orchestrator | 2025-07-12 20:17:36.932465 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:17:36.932476 | orchestrator | Saturday 12 July 2025 20:17:27 +0000 (0:00:00.298) 0:00:07.540 ********* 2025-07-12 20:17:36.932486 | orchestrator | =============================================================================== 2025-07-12 20:17:36.932497 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.71s 2025-07-12 20:17:36.932508 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.30s 2025-07-12 20:17:36.932518 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.25s 2025-07-12 20:17:36.932549 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.74s 2025-07-12 20:17:36.932570 | orchestrator | Get home directory of operator user ------------------------------------- 0.58s 2025-07-12 20:17:36.932582 | orchestrator | Create .kube directory -------------------------------------------------- 0.57s 2025-07-12 20:17:36.932593 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.51s 2025-07-12 20:17:36.932603 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.36s 2025-07-12 20:17:36.932614 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.30s 2025-07-12 20:17:36.932625 | orchestrator | 2025-07-12 20:17:36.932635 | orchestrator | 2025-07-12 20:17:36.932646 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:17:36.932657 | orchestrator | 2025-07-12 20:17:36.932668 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-07-12 20:17:36.932678 | orchestrator | Saturday 12 July 2025 20:16:14 +0000 (0:00:00.453) 0:00:00.453 ********* 2025-07-12 20:17:36.932689 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:17:36.932700 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:17:36.932710 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:17:36.932722 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:17:36.932741 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:17:36.932759 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:17:36.932777 | orchestrator | 2025-07-12 20:17:36.932809 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 20:17:36.932824 | orchestrator | Saturday 12 July 2025 20:16:16 +0000 (0:00:01.249) 0:00:01.703 ********* 2025-07-12 20:17:36.932834 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 20:17:36.932845 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 20:17:36.932856 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 20:17:36.932867 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 20:17:36.932877 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 20:17:36.932920 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-07-12 20:17:36.932933 | orchestrator | 2025-07-12 20:17:36.932944 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-07-12 20:17:36.932954 | orchestrator | 2025-07-12 20:17:36.932965 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-07-12 20:17:36.932976 | orchestrator | Saturday 12 July 2025 20:16:16 +0000 (0:00:00.745) 
0:00:02.449 ********* 2025-07-12 20:17:36.932987 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:17:36.932999 | orchestrator | 2025-07-12 20:17:36.933010 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-07-12 20:17:36.933020 | orchestrator | Saturday 12 July 2025 20:16:18 +0000 (0:00:01.907) 0:00:04.356 ********* 2025-07-12 20:17:36.933031 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-07-12 20:17:36.933042 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-07-12 20:17:36.933052 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-07-12 20:17:36.933063 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-07-12 20:17:36.933074 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-07-12 20:17:36.933084 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-07-12 20:17:36.933095 | orchestrator | 2025-07-12 20:17:36.933105 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-07-12 20:17:36.933116 | orchestrator | Saturday 12 July 2025 20:16:20 +0000 (0:00:01.862) 0:00:06.219 ********* 2025-07-12 20:17:36.933127 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-07-12 20:17:36.933138 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-07-12 20:17:36.933148 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-07-12 20:17:36.933158 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-07-12 20:17:36.933169 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-07-12 20:17:36.933179 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-07-12 20:17:36.933190 | orchestrator | 2025-07-12 20:17:36.933200 | orchestrator | TASK 
[module-load : Drop module persistence] *********************************** 2025-07-12 20:17:36.933211 | orchestrator | Saturday 12 July 2025 20:16:23 +0000 (0:00:03.282) 0:00:09.501 ********* 2025-07-12 20:17:36.933221 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-07-12 20:17:36.933232 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:17:36.933242 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-07-12 20:17:36.933258 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:17:36.933278 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-07-12 20:17:36.933307 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:17:36.933329 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-07-12 20:17:36.933343 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:17:36.933354 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-07-12 20:17:36.933364 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:17:36.933414 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-07-12 20:17:36.933426 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:17:36.933436 | orchestrator | 2025-07-12 20:17:36.933447 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-07-12 20:17:36.933458 | orchestrator | Saturday 12 July 2025 20:16:25 +0000 (0:00:01.730) 0:00:11.232 ********* 2025-07-12 20:17:36.933469 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:17:36.933479 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:17:36.933490 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:17:36.933511 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:17:36.933522 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:17:36.933533 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:17:36.933543 | orchestrator | 2025-07-12 20:17:36.933554 | orchestrator | TASK 
[openvswitch : Ensuring config directories exist] ************************* 2025-07-12 20:17:36.933565 | orchestrator | Saturday 12 July 2025 20:16:26 +0000 (0:00:01.215) 0:00:12.447 ********* 2025-07-12 20:17:36.933578 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 20:17:36.933595 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 20:17:36.933607 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 20:17:36.933618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 20:17:36.933643 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': 
'30'}}}) 2025-07-12 20:17:36.933663 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 20:17:36.933675 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 20:17:36.933687 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 20:17:36.933725 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 20:17:36.933737 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 20:17:36.933760 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': 
True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 20:17:36.933779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 20:17:36.933791 | orchestrator | 2025-07-12 20:17:36.933802 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-07-12 20:17:36.933813 | orchestrator | Saturday 12 July 2025 20:16:28 +0000 (0:00:01.959) 0:00:14.407 ********* 2025-07-12 20:17:36.933825 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 20:17:36.933837 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 20:17:36.933849 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 20:17:36.933872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 20:17:36.933954 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 20:17:36.933969 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 20:17:36.933980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 20:17:36.933992 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 20:17:36.934003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 20:17:36.934082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 20:17:36.934108 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 20:17:36.934120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 
'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 20:17:36.934131 | orchestrator | 2025-07-12 20:17:36.934142 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-07-12 20:17:36.934153 | orchestrator | Saturday 12 July 2025 20:16:33 +0000 (0:00:04.169) 0:00:18.577 ********* 2025-07-12 20:17:36.934164 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:17:36.934175 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:17:36.934185 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:17:36.934196 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:17:36.934206 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:17:36.934217 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:17:36.934227 | orchestrator | 2025-07-12 20:17:36.934238 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-07-12 20:17:36.934248 | orchestrator | Saturday 12 July 2025 20:16:34 +0000 (0:00:01.952) 0:00:20.530 ********* 2025-07-12 20:17:36.934258 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 20:17:36.934275 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 20:17:36.934286 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 20:17:36.934306 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 20:17:36.934325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 20:17:36.934342 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 20:17:36.934982 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 20:17:36.935005 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 20:17:36.935025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 
'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 20:17:36.935040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 20:17:36.935050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-07-12 20:17:36.935060 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-07-12 20:17:36.935079 | orchestrator | 2025-07-12 20:17:36.935089 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-12 20:17:36.935099 | orchestrator | Saturday 12 July 2025 20:16:40 +0000 (0:00:05.522) 0:00:26.052 ********* 2025-07-12 20:17:36.935109 | orchestrator | 2025-07-12 20:17:36.935118 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-12 20:17:36.935128 | orchestrator | Saturday 12 July 2025 20:16:40 +0000 (0:00:00.159) 0:00:26.211 ********* 2025-07-12 20:17:36.935138 | orchestrator | 2025-07-12 20:17:36.935147 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-12 20:17:36.935157 | orchestrator | Saturday 12 July 2025 20:16:40 +0000 (0:00:00.151) 0:00:26.362 ********* 2025-07-12 20:17:36.935166 | orchestrator | 2025-07-12 20:17:36.935175 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-12 20:17:36.935185 | orchestrator | Saturday 12 July 2025 20:16:41 +0000 (0:00:00.249) 0:00:26.612 ********* 2025-07-12 20:17:36.935194 | orchestrator | 2025-07-12 20:17:36.935204 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-12 20:17:36.935213 | 
orchestrator | Saturday 12 July 2025 20:16:41 +0000 (0:00:00.261) 0:00:26.874 ********* 2025-07-12 20:17:36.935223 | orchestrator | 2025-07-12 20:17:36.935232 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-07-12 20:17:36.935242 | orchestrator | Saturday 12 July 2025 20:16:41 +0000 (0:00:00.311) 0:00:27.185 ********* 2025-07-12 20:17:36.935251 | orchestrator | 2025-07-12 20:17:36.935260 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-07-12 20:17:36.935270 | orchestrator | Saturday 12 July 2025 20:16:42 +0000 (0:00:00.746) 0:00:27.932 ********* 2025-07-12 20:17:36.935279 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:17:36.935289 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:17:36.935299 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:17:36.935309 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:17:36.935318 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:17:36.935327 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:17:36.935339 | orchestrator | 2025-07-12 20:17:36.935356 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-07-12 20:17:36.935372 | orchestrator | Saturday 12 July 2025 20:16:54 +0000 (0:00:12.268) 0:00:40.200 ********* 2025-07-12 20:17:36.935389 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:17:36.935407 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:17:36.935425 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:17:36.935441 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:17:36.935457 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:17:36.935467 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:17:36.935476 | orchestrator | 2025-07-12 20:17:36.935486 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-07-12 20:17:36.935533 | orchestrator | Saturday 12 July 2025 
20:16:57 +0000 (0:00:02.381) 0:00:42.582 ********* 2025-07-12 20:17:36.935544 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:17:36.935553 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:17:36.935563 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:17:36.935572 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:17:36.935581 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:17:36.935591 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:17:36.935600 | orchestrator | 2025-07-12 20:17:36.935609 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-07-12 20:17:36.935619 | orchestrator | Saturday 12 July 2025 20:17:08 +0000 (0:00:11.490) 0:00:54.072 ********* 2025-07-12 20:17:36.935636 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-07-12 20:17:36.935646 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-07-12 20:17:36.935656 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-07-12 20:17:36.935671 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-07-12 20:17:36.935680 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-07-12 20:17:36.935690 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-07-12 20:17:36.935699 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-07-12 20:17:36.935708 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-07-12 20:17:36.935718 | 
orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-07-12 20:17:36.935727 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-07-12 20:17:36.935736 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-07-12 20:17:36.935746 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 20:17:36.935755 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 20:17:36.935764 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 20:17:36.935774 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-07-12 20:17:36.935783 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 20:17:36.935792 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 20:17:36.935802 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-07-12 20:17:36.935811 | orchestrator | 2025-07-12 20:17:36.935821 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-07-12 20:17:36.935830 | orchestrator | Saturday 12 July 2025 20:17:19 +0000 (0:00:11.344) 0:01:05.417 ********* 2025-07-12 20:17:36.935840 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-07-12 20:17:36.935849 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:17:36.935858 | orchestrator | 
skipping: [testbed-node-4] => (item=br-ex)  2025-07-12 20:17:36.935868 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:17:36.935877 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-07-12 20:17:36.935908 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:17:36.935920 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-07-12 20:17:36.935930 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-07-12 20:17:36.935939 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-07-12 20:17:36.935948 | orchestrator | 2025-07-12 20:17:36.935958 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-07-12 20:17:36.935967 | orchestrator | Saturday 12 July 2025 20:17:23 +0000 (0:00:03.355) 0:01:08.773 ********* 2025-07-12 20:17:36.935977 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-07-12 20:17:36.935997 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:17:36.936007 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-07-12 20:17:36.936024 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:17:36.936033 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-07-12 20:17:36.936052 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:17:36.936062 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-07-12 20:17:36.936072 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-07-12 20:17:36.936081 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-07-12 20:17:36.936091 | orchestrator | 2025-07-12 20:17:36.936100 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-07-12 20:17:36.936110 | orchestrator | Saturday 12 July 2025 20:17:27 +0000 (0:00:03.892) 0:01:12.665 ********* 2025-07-12 20:17:36.936119 | orchestrator | changed: [testbed-node-3] 
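The "Ensuring OVS bridge is properly setup" and "Ensuring OVS ports are properly setup" tasks above (creating `br-ex` with a `vxlan0` port on the network nodes), and the earlier "Set system-id, hostname and hw-offload" task, boil down to idempotent `ovs-vsctl` invocations. A minimal sketch that only *builds* the command lines — the helper names are illustrative, not kolla-ansible's, and actually executing them would require `ovs-vsctl` on the host:

```python
# Sketch of the idempotent ovs-vsctl calls behind the tasks above.
# Helper names are hypothetical; kolla-ansible drives this via modules,
# not these exact functions. Only command lines are built here.

def bridge_cmd(bridge: str) -> list[str]:
    # --may-exist makes the call idempotent: no error if the bridge exists.
    return ["ovs-vsctl", "--may-exist", "add-br", bridge]

def port_cmd(bridge: str, port: str) -> list[str]:
    return ["ovs-vsctl", "--may-exist", "add-port", bridge, port]

def external_id_cmd(name: str, value: str) -> list[str]:
    # Matches items like {'col': 'external_ids', 'name': 'system-id', ...}.
    return ["ovs-vsctl", "set", "Open_vSwitch", ".",
            f"external_ids:{name}={value}"]

cmds = [
    external_id_cmd("system-id", "testbed-node-0"),
    bridge_cmd("br-ex"),
    port_cmd("br-ex", "vxlan0"),
]
for c in cmds:
    print(" ".join(c))
```

The `--may-exist` flag is why re-running the play reports `ok` instead of failing on nodes where the bridge is already present.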
2025-07-12 20:17:36.936129 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:17:36.936145 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:17:36.936155 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:17:36.936164 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:17:36.936174 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:17:36.936183 | orchestrator | 2025-07-12 20:17:36.936193 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:17:36.936203 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 20:17:36.936213 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 20:17:36.936223 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 20:17:36.936237 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 20:17:36.936247 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 20:17:36.936257 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 20:17:36.936266 | orchestrator | 2025-07-12 20:17:36.936276 | orchestrator | 2025-07-12 20:17:36.936286 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:17:36.936295 | orchestrator | Saturday 12 July 2025 20:17:34 +0000 (0:00:07.728) 0:01:20.394 ********* 2025-07-12 20:17:36.936305 | orchestrator | =============================================================================== 2025-07-12 20:17:36.936314 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.22s 2025-07-12 20:17:36.936324 | orchestrator | openvswitch : Restart openvswitch-db-server container 
------------------ 12.27s 2025-07-12 20:17:36.936333 | orchestrator | openvswitch : Set system-id, hostname and hw-offload ------------------- 11.34s 2025-07-12 20:17:36.936343 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 5.52s 2025-07-12 20:17:36.936352 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.17s 2025-07-12 20:17:36.936362 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.89s 2025-07-12 20:17:36.936371 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.36s 2025-07-12 20:17:36.936385 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 3.28s 2025-07-12 20:17:36.936403 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.38s 2025-07-12 20:17:36.936421 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.96s 2025-07-12 20:17:36.936440 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.95s 2025-07-12 20:17:36.936450 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.91s 2025-07-12 20:17:36.936467 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.88s 2025-07-12 20:17:36.936476 | orchestrator | module-load : Load modules ---------------------------------------------- 1.86s 2025-07-12 20:17:36.936486 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.73s 2025-07-12 20:17:36.936496 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.25s 2025-07-12 20:17:36.936505 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.22s 2025-07-12 20:17:36.936514 | orchestrator | Group hosts based on enabled services 
----------------------------------- 0.75s
2025-07-12 20:17:36.936524 | orchestrator | 2025-07-12 20:17:36 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:17:36.936647 | orchestrator | 2025-07-12 20:17:36 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:17:36.937489 | orchestrator | 2025-07-12 20:17:36 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED
2025-07-12 20:17:36.939911 | orchestrator | 2025-07-12 20:17:36 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED
2025-07-12 20:17:36.940300 | orchestrator | 2025-07-12 20:17:36 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:17:39.995255 | orchestrator | 2025-07-12 20:17:39 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:17:39.997006 | orchestrator | 2025-07-12 20:17:39 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:17:39.997058 | orchestrator | 2025-07-12 20:17:39 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED
2025-07-12 20:17:39.997592 | orchestrator | 2025-07-12 20:17:39 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED
2025-07-12 20:17:39.997616 | orchestrator | 2025-07-12 20:17:39 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:17:43.034736 | orchestrator | 2025-07-12 20:17:43 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:17:43.035019 | orchestrator | 2025-07-12 20:17:43 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:17:43.036007 | orchestrator | 2025-07-12 20:17:43 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED
2025-07-12 20:17:43.036674 | orchestrator | 2025-07-12 20:17:43 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED
2025-07-12 20:17:43.036818 | orchestrator | 2025-07-12 20:17:43 | INFO  | Wait 1 second(s) until the
next check
2025-07-12 20:19:08.360866 | orchestrator | 2025-07-12 20:19:08 | INFO  | Task 
36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:19:08.361511 | orchestrator | 2025-07-12 20:19:08 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:19:08.362537 | orchestrator | 2025-07-12 20:19:08 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED
2025-07-12 20:19:08.363595 | orchestrator | 2025-07-12 20:19:08 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state STARTED
2025-07-12 20:19:08.363630 | orchestrator | 2025-07-12 20:19:08 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:19:11.408835 | orchestrator | 2025-07-12 20:19:11 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:19:11.409851 | orchestrator | 2025-07-12 20:19:11 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:19:11.411355 | orchestrator | 2025-07-12 20:19:11 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED
2025-07-12 20:19:11.412400 | orchestrator | 2025-07-12 20:19:11 | INFO  | Task 01a5a458-a9cc-4370-8a37-d14b77403f90 is in state SUCCESS
2025-07-12 20:19:11.414013 | orchestrator |
2025-07-12 20:19:11.414099 | orchestrator |
2025-07-12 20:19:11.414129 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-07-12 20:19:11.414142 | orchestrator |
2025-07-12 20:19:11.414153 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-07-12 20:19:11.414165 | orchestrator | Saturday 12 July 2025 20:16:42 +0000 (0:00:00.159) 0:00:00.159 *********
2025-07-12 20:19:11.414176 | orchestrator | ok: [localhost] => {
2025-07-12 20:19:11.414190 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-07-12 20:19:11.414201 | orchestrator | }
2025-07-12 20:19:11.414212 | orchestrator |
2025-07-12 20:19:11.414223 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-07-12 20:19:11.414234 | orchestrator | Saturday 12 July 2025 20:16:42 +0000 (0:00:00.083) 0:00:00.243 *********
2025-07-12 20:19:11.414273 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-07-12 20:19:11.414286 | orchestrator | ...ignoring
2025-07-12 20:19:11.414297 | orchestrator |
2025-07-12 20:19:11.414308 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-07-12 20:19:11.414319 | orchestrator | Saturday 12 July 2025 20:16:47 +0000 (0:00:04.595) 0:00:04.838 *********
2025-07-12 20:19:11.414329 | orchestrator | skipping: [localhost]
2025-07-12 20:19:11.414340 | orchestrator |
2025-07-12 20:19:11.414350 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-07-12 20:19:11.414361 | orchestrator | Saturday 12 July 2025 20:16:47 +0000 (0:00:00.094) 0:00:04.932 *********
2025-07-12 20:19:11.414372 | orchestrator | ok: [localhost]
2025-07-12 20:19:11.414392 | orchestrator |
2025-07-12 20:19:11.414407 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:19:11.414418 | orchestrator |
2025-07-12 20:19:11.414429 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:19:11.414439 | orchestrator | Saturday 12 July 2025 20:16:47 +0000 (0:00:00.245) 0:00:05.178 *********
2025-07-12 20:19:11.414450 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:19:11.414460 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:19:11.414471 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:19:11.414481 | orchestrator |
2025-07-12 20:19:11.414491 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:19:11.414503 | orchestrator | Saturday 12 July 2025 20:16:47 +0000 (0:00:00.345) 0:00:05.524 *********
2025-07-12 20:19:11.414522 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-07-12 20:19:11.414535 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-07-12 20:19:11.414546 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-07-12 20:19:11.414560 | orchestrator |
2025-07-12 20:19:11.414577 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-07-12 20:19:11.414589 | orchestrator |
2025-07-12 20:19:11.414599 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-07-12 20:19:11.414612 | orchestrator | Saturday 12 July 2025 20:16:48 +0000 (0:00:00.512) 0:00:06.036 *********
2025-07-12 20:19:11.414624 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:19:11.414637 | orchestrator |
2025-07-12 20:19:11.414649 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-07-12 20:19:11.414661 | orchestrator | Saturday 12 July 2025 20:16:48 +0000 (0:00:00.563) 0:00:06.600 *********
2025-07-12 20:19:11.414673 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:19:11.414686 | orchestrator |
2025-07-12 20:19:11.414705 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-07-12 20:19:11.414721 | orchestrator | Saturday 12 July 2025 20:16:49 +0000 (0:00:01.017) 0:00:07.617 *********
2025-07-12 20:19:11.414734 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:11.414747 | orchestrator |
2025-07-12 20:19:11.414799 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-07-12 20:19:11.414812 | orchestrator | Saturday 12 July 2025 20:16:50 +0000 (0:00:00.382) 0:00:08.000 *********
2025-07-12 20:19:11.414824 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:11.414836 | orchestrator |
2025-07-12 20:19:11.414848 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-07-12 20:19:11.414860 | orchestrator | Saturday 12 July 2025 20:16:50 +0000 (0:00:00.378) 0:00:08.378 *********
2025-07-12 20:19:11.414872 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:11.414884 | orchestrator |
2025-07-12 20:19:11.414895 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-07-12 20:19:11.414907 | orchestrator | Saturday 12 July 2025 20:16:51 +0000 (0:00:00.344) 0:00:08.723 *********
2025-07-12 20:19:11.414919 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:11.414939 | orchestrator |
2025-07-12 20:19:11.414953 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-07-12 20:19:11.414965 | orchestrator | Saturday 12 July 2025 20:16:51 +0000 (0:00:00.571) 0:00:09.294 *********
2025-07-12 20:19:11.414978 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:19:11.414990 | orchestrator |
2025-07-12 20:19:11.415002 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-07-12 20:19:11.415012 | orchestrator | Saturday 12 July 2025 20:16:53 +0000 (0:00:01.373) 0:00:10.667 *********
2025-07-12 20:19:11.415024 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:19:11.415034 | orchestrator |
2025-07-12 20:19:11.415045 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-07-12 20:19:11.415055 | orchestrator | Saturday 12 July 2025 20:16:54 +0000 (0:00:01.177) 0:00:11.844 *********
2025-07-12
20:19:11.415066 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:11.415076 | orchestrator |
2025-07-12 20:19:11.415090 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-07-12 20:19:11.415108 | orchestrator | Saturday 12 July 2025 20:16:54 +0000 (0:00:00.738) 0:00:12.583 *********
2025-07-12 20:19:11.415125 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:19:11.415143 | orchestrator |
2025-07-12 20:19:11.415187 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-07-12 20:19:11.415209 | orchestrator | Saturday 12 July 2025 20:16:55 +0000 (0:00:00.935) 0:00:13.518 *********
2025-07-12 20:19:11.415234 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 20:19:11.415263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 20:19:11.415288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 20:19:11.415320 | orchestrator |
2025-07-12 20:19:11.415340 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-07-12 20:19:11.415359 | orchestrator | Saturday 12 July 2025 20:16:57 +0000 (0:00:01.730) 0:00:15.249 *********
2025-07-12 20:19:11.415401 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 20:19:11.415426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 20:19:11.415448 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-07-12 20:19:11.415469 | orchestrator |
2025-07-12 20:19:11.415480 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-07-12 20:19:11.415491 | orchestrator | Saturday 12 July 2025 20:17:03 +0000 (0:00:06.058) 0:00:21.307 *********
2025-07-12 20:19:11.415508 | orchestrator |
changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-12 20:19:11.415527 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-12 20:19:11.415544 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-07-12 20:19:11.415558 | orchestrator | 2025-07-12 20:19:11.415577 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2025-07-12 20:19:11.415596 | orchestrator | Saturday 12 July 2025 20:17:06 +0000 (0:00:02.807) 0:00:24.115 ********* 2025-07-12 20:19:11.415614 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-12 20:19:11.415625 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-12 20:19:11.415636 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-07-12 20:19:11.415646 | orchestrator | 2025-07-12 20:19:11.415657 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-07-12 20:19:11.415668 | orchestrator | Saturday 12 July 2025 20:17:10 +0000 (0:00:03.901) 0:00:28.016 ********* 2025-07-12 20:19:11.415678 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-12 20:19:11.415689 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-12 20:19:11.415699 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-07-12 20:19:11.415714 | orchestrator | 2025-07-12 20:19:11.415741 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-07-12 20:19:11.415793 | orchestrator | Saturday 12 July 2025 20:17:13 +0000 (0:00:03.075) 0:00:31.091 ********* 
2025-07-12 20:19:11.415813 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-12 20:19:11.415833 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-12 20:19:11.415851 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-07-12 20:19:11.415870 | orchestrator | 2025-07-12 20:19:11.415882 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2025-07-12 20:19:11.415893 | orchestrator | Saturday 12 July 2025 20:17:16 +0000 (0:00:02.653) 0:00:33.744 ********* 2025-07-12 20:19:11.415903 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-12 20:19:11.415914 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-12 20:19:11.415924 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-07-12 20:19:11.415935 | orchestrator | 2025-07-12 20:19:11.415946 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-07-12 20:19:11.415956 | orchestrator | Saturday 12 July 2025 20:17:18 +0000 (0:00:02.559) 0:00:36.304 ********* 2025-07-12 20:19:11.415967 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-12 20:19:11.415977 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-12 20:19:11.415988 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-07-12 20:19:11.416009 | orchestrator | 2025-07-12 20:19:11.416020 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-07-12 20:19:11.416030 | orchestrator | Saturday 12 
July 2025 20:17:21 +0000 (0:00:02.485) 0:00:38.790 ********* 2025-07-12 20:19:11.416041 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:11.416051 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:11.416062 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:11.416072 | orchestrator | 2025-07-12 20:19:11.416083 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-07-12 20:19:11.416093 | orchestrator | Saturday 12 July 2025 20:17:21 +0000 (0:00:00.678) 0:00:39.468 ********* 2025-07-12 20:19:11.416105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 20:19:11.416118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 20:19:11.416147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 20:19:11.416160 | orchestrator | 2025-07-12 20:19:11.416171 | orchestrator | TASK [rabbitmq : Creating 
rabbitmq volume] ************************************* 2025-07-12 20:19:11.416189 | orchestrator | Saturday 12 July 2025 20:17:23 +0000 (0:00:01.668) 0:00:41.137 ********* 2025-07-12 20:19:11.416200 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:19:11.416211 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:19:11.416221 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:19:11.416232 | orchestrator | 2025-07-12 20:19:11.416242 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-07-12 20:19:11.416253 | orchestrator | Saturday 12 July 2025 20:17:24 +0000 (0:00:01.022) 0:00:42.160 ********* 2025-07-12 20:19:11.416264 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:19:11.416274 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:19:11.416285 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:19:11.416295 | orchestrator | 2025-07-12 20:19:11.416306 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2025-07-12 20:19:11.416316 | orchestrator | Saturday 12 July 2025 20:17:32 +0000 (0:00:07.608) 0:00:49.768 ********* 2025-07-12 20:19:11.416328 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:19:11.416347 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:19:11.416358 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:19:11.416369 | orchestrator | 2025-07-12 20:19:11.416380 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-12 20:19:11.416390 | orchestrator | 2025-07-12 20:19:11.416400 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-12 20:19:11.416411 | orchestrator | Saturday 12 July 2025 20:17:32 +0000 (0:00:00.290) 0:00:50.059 ********* 2025-07-12 20:19:11.416422 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:19:11.416432 | orchestrator | 2025-07-12 20:19:11.416443 | orchestrator | TASK [rabbitmq : 
Put RabbitMQ node into maintenance mode] ********************** 2025-07-12 20:19:11.416453 | orchestrator | Saturday 12 July 2025 20:17:32 +0000 (0:00:00.601) 0:00:50.660 ********* 2025-07-12 20:19:11.416464 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:19:11.416474 | orchestrator | 2025-07-12 20:19:11.416488 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-12 20:19:11.416504 | orchestrator | Saturday 12 July 2025 20:17:33 +0000 (0:00:00.219) 0:00:50.880 ********* 2025-07-12 20:19:11.416515 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:19:11.416526 | orchestrator | 2025-07-12 20:19:11.416545 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-12 20:19:11.416557 | orchestrator | Saturday 12 July 2025 20:17:34 +0000 (0:00:01.583) 0:00:52.463 ********* 2025-07-12 20:19:11.416567 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:19:11.416578 | orchestrator | 2025-07-12 20:19:11.416588 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-12 20:19:11.416601 | orchestrator | 2025-07-12 20:19:11.416620 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-12 20:19:11.416633 | orchestrator | Saturday 12 July 2025 20:18:30 +0000 (0:00:55.248) 0:01:47.712 ********* 2025-07-12 20:19:11.416644 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:19:11.416655 | orchestrator | 2025-07-12 20:19:11.416665 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-12 20:19:11.416676 | orchestrator | Saturday 12 July 2025 20:18:30 +0000 (0:00:00.540) 0:01:48.252 ********* 2025-07-12 20:19:11.416686 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:19:11.416696 | orchestrator | 2025-07-12 20:19:11.416707 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] 
*********************************** 2025-07-12 20:19:11.416718 | orchestrator | Saturday 12 July 2025 20:18:30 +0000 (0:00:00.356) 0:01:48.609 ********* 2025-07-12 20:19:11.416728 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:19:11.416738 | orchestrator | 2025-07-12 20:19:11.416749 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-12 20:19:11.416829 | orchestrator | Saturday 12 July 2025 20:18:32 +0000 (0:00:01.871) 0:01:50.480 ********* 2025-07-12 20:19:11.416840 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:19:11.416850 | orchestrator | 2025-07-12 20:19:11.416861 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2025-07-12 20:19:11.416888 | orchestrator | 2025-07-12 20:19:11.416899 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2025-07-12 20:19:11.416910 | orchestrator | Saturday 12 July 2025 20:18:47 +0000 (0:00:15.136) 0:02:05.617 ********* 2025-07-12 20:19:11.416920 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:19:11.416931 | orchestrator | 2025-07-12 20:19:11.416941 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2025-07-12 20:19:11.416952 | orchestrator | Saturday 12 July 2025 20:18:48 +0000 (0:00:00.708) 0:02:06.326 ********* 2025-07-12 20:19:11.416962 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:19:11.416973 | orchestrator | 2025-07-12 20:19:11.416983 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2025-07-12 20:19:11.417001 | orchestrator | Saturday 12 July 2025 20:18:48 +0000 (0:00:00.302) 0:02:06.628 ********* 2025-07-12 20:19:11.417013 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:19:11.417023 | orchestrator | 2025-07-12 20:19:11.417035 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2025-07-12 
20:19:11.417046 | orchestrator | Saturday 12 July 2025 20:18:50 +0000 (0:00:01.669) 0:02:08.297 ********* 2025-07-12 20:19:11.417056 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:19:11.417067 | orchestrator | 2025-07-12 20:19:11.417078 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2025-07-12 20:19:11.417088 | orchestrator | 2025-07-12 20:19:11.417099 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-07-12 20:19:11.417109 | orchestrator | Saturday 12 July 2025 20:19:04 +0000 (0:00:13.982) 0:02:22.280 ********* 2025-07-12 20:19:11.417120 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:19:11.417131 | orchestrator | 2025-07-12 20:19:11.417141 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-07-12 20:19:11.417152 | orchestrator | Saturday 12 July 2025 20:19:05 +0000 (0:00:01.011) 0:02:23.292 ********* 2025-07-12 20:19:11.417162 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-07-12 20:19:11.417172 | orchestrator | enable_outward_rabbitmq_True 2025-07-12 20:19:11.417183 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-07-12 20:19:11.417194 | orchestrator | outward_rabbitmq_restart 2025-07-12 20:19:11.417205 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:19:11.417215 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:19:11.417226 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:19:11.417236 | orchestrator | 2025-07-12 20:19:11.417247 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-07-12 20:19:11.417257 | orchestrator | skipping: no hosts matched 2025-07-12 20:19:11.417268 | orchestrator | 2025-07-12 20:19:11.417278 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-07-12 
20:19:11.417289 | orchestrator | skipping: no hosts matched 2025-07-12 20:19:11.417299 | orchestrator | 2025-07-12 20:19:11.417310 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-07-12 20:19:11.417321 | orchestrator | skipping: no hosts matched 2025-07-12 20:19:11.417331 | orchestrator | 2025-07-12 20:19:11.417342 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:19:11.417353 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-07-12 20:19:11.417364 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-12 20:19:11.417375 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 20:19:11.417386 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 20:19:11.417403 | orchestrator | 2025-07-12 20:19:11.417414 | orchestrator | 2025-07-12 20:19:11.417424 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:19:11.417435 | orchestrator | Saturday 12 July 2025 20:19:08 +0000 (0:00:02.799) 0:02:26.091 ********* 2025-07-12 20:19:11.417445 | orchestrator | =============================================================================== 2025-07-12 20:19:11.417456 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 84.37s 2025-07-12 20:19:11.417466 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.61s 2025-07-12 20:19:11.417477 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 6.06s 2025-07-12 20:19:11.417487 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 5.12s 2025-07-12 20:19:11.417498 | orchestrator | Check 
RabbitMQ service -------------------------------------------------- 4.60s 2025-07-12 20:19:11.417508 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.90s 2025-07-12 20:19:11.417519 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 3.08s 2025-07-12 20:19:11.417529 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.81s 2025-07-12 20:19:11.417540 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.80s 2025-07-12 20:19:11.417550 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.65s 2025-07-12 20:19:11.417561 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.56s 2025-07-12 20:19:11.417571 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.49s 2025-07-12 20:19:11.417582 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.85s 2025-07-12 20:19:11.417592 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.73s 2025-07-12 20:19:11.417603 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.67s 2025-07-12 20:19:11.417613 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.37s 2025-07-12 20:19:11.417624 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.18s 2025-07-12 20:19:11.417635 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.02s 2025-07-12 20:19:11.417726 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.02s 2025-07-12 20:19:11.417816 | orchestrator | Include rabbitmq post-deploy.yml ---------------------------------------- 1.01s 2025-07-12 20:19:11.417830 | orchestrator | 2025-07-12 20:19:11 | 
INFO  | Wait 1 second(s) until the next check 2025-07-12 20:19:14.471134 | orchestrator | 2025-07-12 20:19:14 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:19:14.471247 | orchestrator | 2025-07-12 20:19:14 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:19:14.471666 | orchestrator | 2025-07-12 20:19:14 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED 2025-07-12 20:19:14.471691 | orchestrator | 2025-07-12 20:19:14 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:19:17.516055 | orchestrator | 2025-07-12 20:19:17 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:19:17.518327 | orchestrator | 2025-07-12 20:19:17 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:19:17.520262 | orchestrator | 2025-07-12 20:19:17 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED 2025-07-12 20:19:17.520350 | orchestrator | 2025-07-12 20:19:17 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:19:20.567964 | orchestrator | 2025-07-12 20:19:20 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:19:20.568556 | orchestrator | 2025-07-12 20:19:20 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:19:20.569664 | orchestrator | 2025-07-12 20:19:20 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED 2025-07-12 20:19:20.569817 | orchestrator | 2025-07-12 20:19:20 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:19:23.609629 | orchestrator | 2025-07-12 20:19:23 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:19:23.611116 | orchestrator | 2025-07-12 20:19:23 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:19:23.612936 | orchestrator | 2025-07-12 20:19:23 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in 
state STARTED 2025-07-12 20:19:23.613002 | orchestrator | 2025-07-12 20:19:23 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:19:26.670231 | orchestrator | 2025-07-12 20:19:26 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:19:26.670909 | orchestrator | 2025-07-12 20:19:26 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:19:26.671874 | orchestrator | 2025-07-12 20:19:26 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED 2025-07-12 20:19:26.671913 | orchestrator | 2025-07-12 20:19:26 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:19:29.726570 | orchestrator | 2025-07-12 20:19:29 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:19:29.726645 | orchestrator | 2025-07-12 20:19:29 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:19:29.728668 | orchestrator | 2025-07-12 20:19:29 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED 2025-07-12 20:19:29.728850 | orchestrator | 2025-07-12 20:19:29 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:19:32.815696 | orchestrator | 2025-07-12 20:19:32 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:19:32.816311 | orchestrator | 2025-07-12 20:19:32 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:19:32.816813 | orchestrator | 2025-07-12 20:19:32 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED 2025-07-12 20:19:32.817040 | orchestrator | 2025-07-12 20:19:32 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:19:35.876028 | orchestrator | 2025-07-12 20:19:35 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:19:35.876125 | orchestrator | 2025-07-12 20:19:35 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:19:35.877158 | orchestrator 
| 2025-07-12 20:19:35 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED 2025-07-12 20:19:35.878575 | orchestrator | 2025-07-12 20:19:35 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:19:38.937131 | orchestrator | 2025-07-12 20:19:38 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:19:38.938501 | orchestrator | 2025-07-12 20:19:38 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:19:38.939616 | orchestrator | 2025-07-12 20:19:38 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED 2025-07-12 20:19:38.940405 | orchestrator | 2025-07-12 20:19:38 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:19:41.980179 | orchestrator | 2025-07-12 20:19:41 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:19:41.981170 | orchestrator | 2025-07-12 20:19:41 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:19:41.983056 | orchestrator | 2025-07-12 20:19:41 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED 2025-07-12 20:19:41.983134 | orchestrator | 2025-07-12 20:19:41 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:19:45.029919 | orchestrator | 2025-07-12 20:19:45 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:19:45.030726 | orchestrator | 2025-07-12 20:19:45 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:19:45.034232 | orchestrator | 2025-07-12 20:19:45 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED 2025-07-12 20:19:45.034270 | orchestrator | 2025-07-12 20:19:45 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:19:48.085136 | orchestrator | 2025-07-12 20:19:48 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:19:48.086851 | orchestrator | 2025-07-12 20:19:48 | INFO  | Task 
27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:19:48.089105 | orchestrator | 2025-07-12 20:19:48 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED 2025-07-12 20:19:48.089496 | orchestrator | 2025-07-12 20:19:48 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:19:51.144919 | orchestrator | 2025-07-12 20:19:51 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:19:51.147233 | orchestrator | 2025-07-12 20:19:51 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:19:51.150958 | orchestrator | 2025-07-12 20:19:51 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED 2025-07-12 20:19:51.151015 | orchestrator | 2025-07-12 20:19:51 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:19:54.208980 | orchestrator | 2025-07-12 20:19:54 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:19:54.209728 | orchestrator | 2025-07-12 20:19:54 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:19:54.210929 | orchestrator | 2025-07-12 20:19:54 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED 2025-07-12 20:19:54.210972 | orchestrator | 2025-07-12 20:19:54 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:19:57.262583 | orchestrator | 2025-07-12 20:19:57 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:19:57.266360 | orchestrator | 2025-07-12 20:19:57 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:19:57.269087 | orchestrator | 2025-07-12 20:19:57 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED 2025-07-12 20:19:57.269131 | orchestrator | 2025-07-12 20:19:57 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:20:00.317329 | orchestrator | 2025-07-12 20:20:00 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state 
STARTED
2025-07-12 20:20:00.324203 | orchestrator | 2025-07-12 20:20:00 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:20:00.329429 | orchestrator | 2025-07-12 20:20:00 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED
2025-07-12 20:20:00.329490 | orchestrator | 2025-07-12 20:20:00 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:20:03.374373 | orchestrator | 2025-07-12 20:20:03 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:20:03.374631 | orchestrator | 2025-07-12 20:20:03 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:20:03.375465 | orchestrator | 2025-07-12 20:20:03 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED
2025-07-12 20:20:03.375538 | orchestrator | 2025-07-12 20:20:03 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:20:06.434744 | orchestrator | 2025-07-12 20:20:06 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:20:06.434831 | orchestrator | 2025-07-12 20:20:06 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:20:06.434841 | orchestrator | 2025-07-12 20:20:06 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state STARTED
2025-07-12 20:20:06.434848 | orchestrator | 2025-07-12 20:20:06 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:20:09.482295 | orchestrator | 2025-07-12 20:20:09 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:20:09.488724 | orchestrator | 2025-07-12 20:20:09 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED
2025-07-12 20:20:09.492814 | orchestrator | 2025-07-12 20:20:09 | INFO  | Task 1c68ce45-f059-4b0e-b1cd-4eee82536411 is in state SUCCESS
2025-07-12 20:20:09.496817 | orchestrator |
2025-07-12 20:20:09.496870 | orchestrator |
2025-07-12 20:20:09.496884 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:20:09.496895 | orchestrator |
2025-07-12 20:20:09.496906 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:20:09.496917 | orchestrator | Saturday 12 July 2025 20:17:39 +0000 (0:00:00.254) 0:00:00.254 *********
2025-07-12 20:20:09.496928 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:20:09.496939 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:20:09.496945 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:20:09.496951 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:20:09.496957 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:20:09.496964 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:20:09.496970 | orchestrator |
2025-07-12 20:20:09.496976 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:20:09.496983 | orchestrator | Saturday 12 July 2025 20:17:40 +0000 (0:00:00.690) 0:00:00.945 *********
2025-07-12 20:20:09.496989 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2025-07-12 20:20:09.496996 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2025-07-12 20:20:09.497002 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2025-07-12 20:20:09.497008 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2025-07-12 20:20:09.497014 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2025-07-12 20:20:09.497020 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2025-07-12 20:20:09.497027 | orchestrator |
2025-07-12 20:20:09.497033 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2025-07-12 20:20:09.497039 | orchestrator |
2025-07-12 20:20:09.497045 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2025-07-12 20:20:09.497051 | orchestrator | Saturday 12 July 2025 20:17:41 +0000 (0:00:01.447) 0:00:02.392 *********
2025-07-12 20:20:09.497059 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:20:09.497067 | orchestrator |
2025-07-12 20:20:09.497073 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2025-07-12 20:20:09.497079 | orchestrator | Saturday 12 July 2025 20:17:42 +0000 (0:00:01.294) 0:00:03.687 *********
2025-07-12 20:20:09.497088 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497097 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497128 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497136 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497188 | orchestrator |
2025-07-12 20:20:09.497201 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2025-07-12 20:20:09.497212 | orchestrator | Saturday 12 July 2025 20:17:44 +0000 (0:00:01.331) 0:00:05.018 *********
2025-07-12 20:20:09.497223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497235 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497247 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497259 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497329 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497336 | orchestrator |
2025-07-12 20:20:09.497343 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-07-12 20:20:09.497351 | orchestrator | Saturday 12 July 2025 20:17:45 +0000 (0:00:01.725) 0:00:06.744 *********
2025-07-12 20:20:09.497358 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497370 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497386 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497424 | orchestrator |
2025-07-12 20:20:09.497432 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-07-12 20:20:09.497440 | orchestrator | Saturday 12 July 2025 20:17:47 +0000 (0:00:01.358) 0:00:08.102 *********
2025-07-12 20:20:09.497448 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497457 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497466 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497600 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497610 | orchestrator |
2025-07-12 20:20:09.497619 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-07-12 20:20:09.497627 | orchestrator | Saturday 12 July 2025 20:17:48 +0000 (0:00:01.602) 0:00:09.705 *********
2025-07-12 20:20:09.497634 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497647 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497667 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497759 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:20:09.497766 | orchestrator |
2025-07-12 20:20:09.497773 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-07-12 20:20:09.497781 | orchestrator | Saturday 12 July 2025 20:17:50 +0000 (0:00:01.595) 0:00:11.301 *********
2025-07-12 20:20:09.497788 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:20:09.497797 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:20:09.497804 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:20:09.497811 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:20:09.497818 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:20:09.497825 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:20:09.497832 | orchestrator |
2025-07-12 20:20:09.497839 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-07-12 20:20:09.497847 | orchestrator | Saturday 12 July 2025 20:17:53 +0000 (0:00:02.738) 0:00:14.039 *********
2025-07-12 20:20:09.497859 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-07-12 20:20:09.497866 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-07-12 20:20:09.497873 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-07-12 20:20:09.497886 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-07-12 20:20:09.497894 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-07-12 20:20:09.497901 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-07-12 20:20:09.497908 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 20:20:09.497921 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 20:20:09.497933 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 20:20:09.497945 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 20:20:09.497957 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 20:20:09.497968 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-07-12 20:20:09.497980 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 20:20:09.497993 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 20:20:09.498006 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 20:20:09.498073 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 20:20:09.498090 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 20:20:09.498102 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-07-12 20:20:09.498114 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 20:20:09.498128 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 20:20:09.498140 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 20:20:09.498152 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 20:20:09.498163 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 20:20:09.498175 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-07-12 20:20:09.498187 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 20:20:09.498199 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 20:20:09.498228 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 20:20:09.498243 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 20:20:09.498255 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 20:20:09.498268 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-07-12 20:20:09.498280 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 20:20:09.498293 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 20:20:09.498306 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 20:20:09.498319 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 20:20:09.498332 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 20:20:09.498345 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-07-12 20:20:09.498359 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-07-12 20:20:09.498382 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-07-12 20:20:09.498401 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-07-12 20:20:09.498414 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-07-12 20:20:09.498435 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-07-12 20:20:09.498447 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-07-12 20:20:09.498458 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-07-12 20:20:09.498471 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-07-12 20:20:09.498482 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-07-12 20:20:09.498494 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-07-12 20:20:09.498506 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-07-12 20:20:09.498518 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-07-12 20:20:09.498530 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-07-12 20:20:09.498542 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-07-12 20:20:09.498555 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-07-12 20:20:09.498568 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-07-12 20:20:09.498581 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-07-12 20:20:09.498593 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-07-12 20:20:09.498607 | orchestrator |
2025-07-12 20:20:09.498620 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-12 20:20:09.498633 | orchestrator | Saturday 12 July 2025 20:18:11 +0000 (0:00:18.166) 0:00:32.206 *********
2025-07-12 20:20:09.498646 | orchestrator |
2025-07-12 20:20:09.498659 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-12 20:20:09.498692 | orchestrator | Saturday 12 July 2025 20:18:11 +0000 (0:00:00.088) 0:00:32.295 *********
2025-07-12 20:20:09.498706 | orchestrator |
2025-07-12 20:20:09.498717 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-12 20:20:09.498729 | orchestrator | Saturday 12 July 2025 20:18:11 +0000 (0:00:00.130) 0:00:32.425 *********
2025-07-12 20:20:09.498741 | orchestrator |
2025-07-12 20:20:09.498753 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-12 20:20:09.498765 | orchestrator | Saturday 12 July 2025 20:18:11 +0000 (0:00:00.082) 0:00:32.507 *********
2025-07-12 20:20:09.498777 | orchestrator |
2025-07-12 20:20:09.498790 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-12 20:20:09.498801 | orchestrator | Saturday 12 July 2025 20:18:11 +0000 (0:00:00.058) 0:00:32.565 *********
2025-07-12 20:20:09.498822 | orchestrator |
2025-07-12 20:20:09.498834 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-07-12 20:20:09.498846 | orchestrator | Saturday 12 July 2025 20:18:11 +0000 (0:00:00.058) 0:00:32.624 *********
2025-07-12 20:20:09.498857 | orchestrator |
2025-07-12 20:20:09.498870 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-07-12 20:20:09.498882 | orchestrator | Saturday 12 July 2025 20:18:11 +0000 (0:00:00.060) 0:00:32.685 *********
2025-07-12 20:20:09.498893 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:20:09.498906 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:20:09.498918 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:20:09.498930 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:20:09.498941 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:20:09.498953 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:20:09.498966 | orchestrator |
2025-07-12 20:20:09.498977 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-07-12 20:20:09.498989 | orchestrator | Saturday 12 July 2025 20:18:13 +0000 (0:00:02.127) 0:00:34.812 *********
2025-07-12 20:20:09.499001 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:20:09.499013 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:20:09.499025 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:20:09.499037 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:20:09.499049 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:20:09.499061 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:20:09.499073 | orchestrator |
2025-07-12 20:20:09.499085 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-07-12 20:20:09.499097 | orchestrator |
2025-07-12 20:20:09.499108 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-07-12 20:20:09.499120 | orchestrator | Saturday 12 July 2025 20:18:53 +0000 (0:00:39.677) 0:01:14.490 *********
2025-07-12 20:20:09.499138 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:20:09.499151 | orchestrator |
2025-07-12 20:20:09.499162 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-07-12 20:20:09.499174 | orchestrator | Saturday 12 July 2025 20:18:54 +0000 (0:00:00.526) 0:01:15.016 *********
2025-07-12 20:20:09.499186 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:20:09.499198 | orchestrator |
2025-07-12 20:20:09.499216 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-07-12 20:20:09.499229 | orchestrator | Saturday 12 July 2025 20:18:54 +0000 (0:00:00.768) 0:01:15.785 *********
2025-07-12 20:20:09.499241 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:20:09.499253 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:20:09.499265 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:20:09.499277 | orchestrator |
2025-07-12 20:20:09.499289 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-07-12 20:20:09.499301 | orchestrator | Saturday 12 July 2025 20:18:55 +0000 (0:00:00.811) 0:01:16.597 *********
2025-07-12 20:20:09.499313 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:20:09.499325 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:20:09.499337 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:20:09.499349 | orchestrator |
2025-07-12 20:20:09.499361 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-07-12 20:20:09.499373 | orchestrator | Saturday 12 July 2025 20:18:56 +0000 (0:00:00.351) 0:01:16.948 *********
2025-07-12 20:20:09.499385 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:20:09.499397 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:20:09.499409 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:20:09.499420 | orchestrator |
2025-07-12 20:20:09.499432 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-07-12 20:20:09.499444 | orchestrator | Saturday 12 July 2025 20:18:56 +0000 (0:00:00.305) 0:01:17.254 *********
2025-07-12 20:20:09.499456 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:20:09.499467 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:20:09.499495 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:20:09.499508 | orchestrator |
2025-07-12 20:20:09.499520 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-07-12 20:20:09.499532 | orchestrator | Saturday 12 July 2025 20:18:56 +0000 (0:00:00.431) 0:01:17.686 *********
2025-07-12 20:20:09.499543 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:20:09.499555 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:20:09.499566 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:20:09.499576 | orchestrator |
2025-07-12 20:20:09.499589 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-07-12 20:20:09.499600 | orchestrator | Saturday 12 July 2025 20:18:57 +0000 (0:00:00.288) 0:01:17.974 *********
2025-07-12 20:20:09.499612 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:20:09.499624 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:20:09.499636 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:20:09.499648 | orchestrator |
2025-07-12 20:20:09.499660 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-07-12 20:20:09.499687 | orchestrator | Saturday 12 July 2025 20:18:57 +0000 (0:00:00.259) 0:01:18.234 *********
2025-07-12 20:20:09.499700 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:20:09.499713 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:20:09.499725 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:20:09.499736 | orchestrator |
2025-07-12 20:20:09.499748 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-07-12 20:20:09.499756 | orchestrator | Saturday 12 July 2025 20:18:57 +0000 (0:00:00.254) 0:01:18.488 *********
2025-07-12 20:20:09.499763 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:20:09.499770 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:20:09.499777 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:20:09.499785 | orchestrator |
2025-07-12 20:20:09.499792 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-07-12 20:20:09.499799 | orchestrator | Saturday 12 July 2025 20:18:58 +0000 (0:00:00.518) 0:01:19.006 *********
2025-07-12 20:20:09.499806 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:20:09.499813 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:20:09.499820 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:20:09.499896 | orchestrator |
2025-07-12 20:20:09.499906 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-07-12 20:20:09.499913 | orchestrator | Saturday 12 July 2025 20:18:58 +0000 (0:00:00.316) 0:01:19.323 *********
2025-07-12 20:20:09.499921 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:20:09.499928 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:20:09.499935 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:20:09.499942 | orchestrator |
2025-07-12 20:20:09.499949 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-07-12 20:20:09.499956 | orchestrator | Saturday 12 July 2025 20:18:58 +0000 (0:00:00.314) 0:01:19.638 *********
2025-07-12 20:20:09.499963 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:20:09.499970 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:20:09.499977 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:20:09.499984 | orchestrator |
2025-07-12 20:20:09.499991 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-07-12 20:20:09.499999 | orchestrator | Saturday 12 July 2025 20:18:59 +0000 (0:00:00.301) 0:01:19.939 *********
2025-07-12 20:20:09.500006 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:20:09.500013 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:20:09.500020 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:20:09.500032 | orchestrator |
2025-07-12 20:20:09.500044 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-07-12 20:20:09.500056 | orchestrator | Saturday 12 July 2025 20:18:59 +0000 (0:00:00.451) 0:01:20.390 *********
2025-07-12 20:20:09.500067 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:20:09.500085 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:20:09.500098 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:20:09.500124 | orchestrator |
2025-07-12 20:20:09.500136 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-07-12 20:20:09.500147 | orchestrator | Saturday 12 July 2025 20:18:59 +0000 (0:00:00.274) 0:01:20.665 *********
2025-07-12 20:20:09.500160 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:20:09.500167 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:20:09.500174 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:20:09.500181 | orchestrator |
2025-07-12 20:20:09.500195 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-07-12 20:20:09.500202 | orchestrator | Saturday 12 July 2025 20:19:00 +0000 (0:00:00.347) 0:01:21.012 *********
2025-07-12 20:20:09.500209 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:20:09.500216 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:20:09.500223 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:20:09.500230 | orchestrator |
2025-07-12 20:20:09.500245 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-07-12 20:20:09.500253 | orchestrator | Saturday 12 July 2025 20:19:00 +0000 (0:00:00.281) 0:01:21.293 *********
2025-07-12 20:20:09.500260 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:20:09.500267 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:20:09.500274 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:20:09.500281 | orchestrator |
2025-07-12 20:20:09.500288 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-07-12 20:20:09.500295 | orchestrator | Saturday 12 July 2025 20:19:00 +0000 (0:00:00.439) 0:01:21.732 *********
2025-07-12 20:20:09.500303 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:20:09.500310 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:20:09.500317 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:20:09.500324 | orchestrator |
2025-07-12 20:20:09.500331 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-07-12 20:20:09.500338 | orchestrator | Saturday 12 July 2025 20:19:01 +0000 (0:00:00.276) 0:01:22.009 *********
2025-07-12 20:20:09.500345 |
orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:20:09.500352 | orchestrator | 2025-07-12 20:20:09.500359 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2025-07-12 20:20:09.500366 | orchestrator | Saturday 12 July 2025 20:19:01 +0000 (0:00:00.513) 0:01:22.523 ********* 2025-07-12 20:20:09.500373 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:20:09.500381 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:20:09.500388 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:20:09.500395 | orchestrator | 2025-07-12 20:20:09.500402 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2025-07-12 20:20:09.500410 | orchestrator | Saturday 12 July 2025 20:19:02 +0000 (0:00:00.722) 0:01:23.245 ********* 2025-07-12 20:20:09.500417 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:20:09.500424 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:20:09.500431 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:20:09.500438 | orchestrator | 2025-07-12 20:20:09.500445 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2025-07-12 20:20:09.500452 | orchestrator | Saturday 12 July 2025 20:19:02 +0000 (0:00:00.448) 0:01:23.694 ********* 2025-07-12 20:20:09.500459 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:20:09.500466 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:20:09.500473 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:20:09.500480 | orchestrator | 2025-07-12 20:20:09.500488 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2025-07-12 20:20:09.500495 | orchestrator | Saturday 12 July 2025 20:19:03 +0000 (0:00:00.323) 0:01:24.017 ********* 2025-07-12 20:20:09.500502 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:20:09.500509 | orchestrator | skipping: 
[testbed-node-1] 2025-07-12 20:20:09.500516 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:20:09.500523 | orchestrator | 2025-07-12 20:20:09.500531 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2025-07-12 20:20:09.500543 | orchestrator | Saturday 12 July 2025 20:19:03 +0000 (0:00:00.405) 0:01:24.423 ********* 2025-07-12 20:20:09.500551 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:20:09.500558 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:20:09.500565 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:20:09.500572 | orchestrator | 2025-07-12 20:20:09.500579 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2025-07-12 20:20:09.500586 | orchestrator | Saturday 12 July 2025 20:19:04 +0000 (0:00:00.500) 0:01:24.924 ********* 2025-07-12 20:20:09.500593 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:20:09.500600 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:20:09.500607 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:20:09.500614 | orchestrator | 2025-07-12 20:20:09.500622 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2025-07-12 20:20:09.500629 | orchestrator | Saturday 12 July 2025 20:19:04 +0000 (0:00:00.414) 0:01:25.338 ********* 2025-07-12 20:20:09.500636 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:20:09.500643 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:20:09.500650 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:20:09.500657 | orchestrator | 2025-07-12 20:20:09.500664 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2025-07-12 20:20:09.500689 | orchestrator | Saturday 12 July 2025 20:19:04 +0000 (0:00:00.479) 0:01:25.818 ********* 2025-07-12 20:20:09.500698 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:20:09.500705 | orchestrator 
| skipping: [testbed-node-1] 2025-07-12 20:20:09.500712 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:20:09.500719 | orchestrator | 2025-07-12 20:20:09.500726 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-07-12 20:20:09.500733 | orchestrator | Saturday 12 July 2025 20:19:05 +0000 (0:00:00.462) 0:01:26.281 ********* 2025-07-12 20:20:09.500741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.500755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.500770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:20:09.500805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711',
'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.500819 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.500838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.500850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.500862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.500875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.500887 | orchestrator | 2025-07-12 20:20:09.500899 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-07-12 20:20:09.500911 | orchestrator | Saturday 12 July 2025 20:19:07 +0000 (0:00:01.808) 0:01:28.089 ********* 2025-07-12 20:20:09.500920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.500933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.500963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.500980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.500993 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.501016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.501028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.501055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': 
{'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.501069 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.501087 | orchestrator | 2025-07-12 20:20:09.501119 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-07-12 20:20:09.501134 | orchestrator | Saturday 12 July 2025 20:19:12 +0000 (0:00:04.798) 0:01:32.888 ********* 2025-07-12 20:20:09.501146 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.501159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.501181 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.501207 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.501220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.501240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.501251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.501269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.501285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.501298 | orchestrator | 2025-07-12 20:20:09.501310 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-12 20:20:09.501323 | orchestrator | Saturday 12 July 2025 20:19:14 +0000 (0:00:02.234) 0:01:35.123 ********* 2025-07-12 20:20:09.501335 | orchestrator | 2025-07-12 20:20:09.501347 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-12 20:20:09.501359 | orchestrator | Saturday 12 July 2025 20:19:14 +0000 (0:00:00.071) 0:01:35.194 ********* 2025-07-12 20:20:09.501371 | orchestrator | 2025-07-12 20:20:09.501383 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-12 20:20:09.501394 | orchestrator | Saturday 12 July 2025 20:19:14 +0000 (0:00:00.111) 0:01:35.306 ********* 2025-07-12 20:20:09.501406 | 
orchestrator | 2025-07-12 20:20:09.501418 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-07-12 20:20:09.501430 | orchestrator | Saturday 12 July 2025 20:19:14 +0000 (0:00:00.076) 0:01:35.383 ********* 2025-07-12 20:20:09.501442 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:20:09.501455 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:20:09.501467 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:20:09.501479 | orchestrator | 2025-07-12 20:20:09.501492 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-07-12 20:20:09.501503 | orchestrator | Saturday 12 July 2025 20:19:17 +0000 (0:00:03.211) 0:01:38.594 ********* 2025-07-12 20:20:09.501515 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:20:09.501527 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:20:09.501539 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:20:09.501551 | orchestrator | 2025-07-12 20:20:09.501563 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-07-12 20:20:09.501574 | orchestrator | Saturday 12 July 2025 20:19:20 +0000 (0:00:02.544) 0:01:41.139 ********* 2025-07-12 20:20:09.501587 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:20:09.501610 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:20:09.501622 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:20:09.501635 | orchestrator | 2025-07-12 20:20:09.501647 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-07-12 20:20:09.501665 | orchestrator | Saturday 12 July 2025 20:19:28 +0000 (0:00:07.861) 0:01:49.001 ********* 2025-07-12 20:20:09.501706 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:20:09.501719 | orchestrator | 2025-07-12 20:20:09.501733 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-07-12 
20:20:09.501745 | orchestrator | Saturday 12 July 2025 20:19:28 +0000 (0:00:00.126) 0:01:49.127 ********* 2025-07-12 20:20:09.501758 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:20:09.501771 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:20:09.501792 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:20:09.501805 | orchestrator | 2025-07-12 20:20:09.501817 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-07-12 20:20:09.501828 | orchestrator | Saturday 12 July 2025 20:19:29 +0000 (0:00:00.786) 0:01:49.914 ********* 2025-07-12 20:20:09.501841 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:20:09.501853 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:20:09.501865 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:20:09.501878 | orchestrator | 2025-07-12 20:20:09.501890 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-07-12 20:20:09.501902 | orchestrator | Saturday 12 July 2025 20:19:29 +0000 (0:00:00.633) 0:01:50.547 ********* 2025-07-12 20:20:09.501915 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:20:09.501928 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:20:09.501940 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:20:09.501952 | orchestrator | 2025-07-12 20:20:09.501964 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-07-12 20:20:09.501977 | orchestrator | Saturday 12 July 2025 20:19:30 +0000 (0:00:00.799) 0:01:51.347 ********* 2025-07-12 20:20:09.501989 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:20:09.502002 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:20:09.502047 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:20:09.502063 | orchestrator | 2025-07-12 20:20:09.502076 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-07-12 20:20:09.502089 | orchestrator | Saturday 
12 July 2025 20:19:31 +0000 (0:00:00.688) 0:01:52.035 ********* 2025-07-12 20:20:09.502102 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:20:09.502114 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:20:09.502126 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:20:09.502139 | orchestrator | 2025-07-12 20:20:09.502152 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-07-12 20:20:09.502171 | orchestrator | Saturday 12 July 2025 20:19:32 +0000 (0:00:01.170) 0:01:53.206 ********* 2025-07-12 20:20:09.502184 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:20:09.502197 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:20:09.502217 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:20:09.502230 | orchestrator | 2025-07-12 20:20:09.502243 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-07-12 20:20:09.502255 | orchestrator | Saturday 12 July 2025 20:19:33 +0000 (0:00:01.105) 0:01:54.311 ********* 2025-07-12 20:20:09.502268 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:20:09.502279 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:20:09.502292 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:20:09.502304 | orchestrator | 2025-07-12 20:20:09.502317 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-07-12 20:20:09.502328 | orchestrator | Saturday 12 July 2025 20:19:33 +0000 (0:00:00.328) 0:01:54.639 ********* 2025-07-12 20:20:09.502341 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502365 | orchestrator | ok: [testbed-node-1] => (item={'key': 
'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502377 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502390 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502403 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502431 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502445 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502458 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502470 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502483 | orchestrator | 2025-07-12 20:20:09.502496 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-07-12 20:20:09.502508 | orchestrator | Saturday 12 July 2025 20:19:35 +0000 (0:00:01.455) 0:01:56.095 ********* 2025-07-12 20:20:09.502521 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502543 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502556 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502598 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 
'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502753 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502778 | orchestrator | 2025-07-12 20:20:09.502791 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-07-12 20:20:09.502803 | orchestrator | Saturday 12 July 2025 20:19:39 +0000 (0:00:04.641) 0:02:00.736 ********* 2025-07-12 
20:20:09.502816 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502841 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502855 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502881 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502913 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502936 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:20:09.502963 | orchestrator | 
2025-07-12 20:20:09.502975 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-12 20:20:09.502988 | orchestrator | Saturday 12 July 2025 20:19:43 +0000 (0:00:03.305) 0:02:04.042 ********* 2025-07-12 20:20:09.503000 | orchestrator | 2025-07-12 20:20:09.503013 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-12 20:20:09.503034 | orchestrator | Saturday 12 July 2025 20:19:43 +0000 (0:00:00.065) 0:02:04.107 ********* 2025-07-12 20:20:09.503047 | orchestrator | 2025-07-12 20:20:09.503059 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-07-12 20:20:09.503072 | orchestrator | Saturday 12 July 2025 20:19:43 +0000 (0:00:00.064) 0:02:04.172 ********* 2025-07-12 20:20:09.503084 | orchestrator | 2025-07-12 20:20:09.503096 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-07-12 20:20:09.503109 | orchestrator | Saturday 12 July 2025 20:19:43 +0000 (0:00:00.064) 0:02:04.237 ********* 2025-07-12 20:20:09.503122 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:20:09.503134 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:20:09.503146 | orchestrator | 2025-07-12 20:20:09.503159 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-07-12 20:20:09.503171 | orchestrator | Saturday 12 July 2025 20:19:49 +0000 (0:00:06.228) 0:02:10.465 ********* 2025-07-12 20:20:09.503184 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:20:09.503197 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:20:09.503210 | orchestrator | 2025-07-12 20:20:09.503222 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-07-12 20:20:09.503234 | orchestrator | Saturday 12 July 2025 20:19:55 +0000 (0:00:06.177) 0:02:16.643 ********* 2025-07-12 20:20:09.503247 | orchestrator | 
changed: [testbed-node-1] 2025-07-12 20:20:09.503266 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:20:09.503279 | orchestrator | 2025-07-12 20:20:09.503297 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-07-12 20:20:09.503311 | orchestrator | Saturday 12 July 2025 20:20:02 +0000 (0:00:06.210) 0:02:22.854 ********* 2025-07-12 20:20:09.503321 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:20:09.503334 | orchestrator | 2025-07-12 20:20:09.503347 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-07-12 20:20:09.503360 | orchestrator | Saturday 12 July 2025 20:20:02 +0000 (0:00:00.146) 0:02:23.001 ********* 2025-07-12 20:20:09.503372 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:20:09.503384 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:20:09.503396 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:20:09.503409 | orchestrator | 2025-07-12 20:20:09.503421 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-07-12 20:20:09.503434 | orchestrator | Saturday 12 July 2025 20:20:03 +0000 (0:00:01.071) 0:02:24.072 ********* 2025-07-12 20:20:09.503446 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:20:09.503459 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:20:09.503471 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:20:09.503483 | orchestrator | 2025-07-12 20:20:09.503496 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-07-12 20:20:09.503508 | orchestrator | Saturday 12 July 2025 20:20:03 +0000 (0:00:00.594) 0:02:24.667 ********* 2025-07-12 20:20:09.503521 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:20:09.503534 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:20:09.503546 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:20:09.503558 | orchestrator | 2025-07-12 20:20:09.503570 | orchestrator 
| TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-07-12 20:20:09.503583 | orchestrator | Saturday 12 July 2025 20:20:04 +0000 (0:00:00.715) 0:02:25.382 ********* 2025-07-12 20:20:09.503595 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:20:09.503607 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:20:09.503620 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:20:09.503632 | orchestrator | 2025-07-12 20:20:09.503645 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-07-12 20:20:09.503657 | orchestrator | Saturday 12 July 2025 20:20:05 +0000 (0:00:00.594) 0:02:25.977 ********* 2025-07-12 20:20:09.503688 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:20:09.503701 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:20:09.503713 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:20:09.503725 | orchestrator | 2025-07-12 20:20:09.503737 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-07-12 20:20:09.503759 | orchestrator | Saturday 12 July 2025 20:20:06 +0000 (0:00:01.191) 0:02:27.169 ********* 2025-07-12 20:20:09.503771 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:20:09.503783 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:20:09.503794 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:20:09.503806 | orchestrator | 2025-07-12 20:20:09.503825 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:20:09.503838 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-07-12 20:20:09.503859 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-07-12 20:20:09.503872 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-07-12 20:20:09.503884 | orchestrator | 
testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:20:09.503896 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:20:09.503908 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:20:09.503920 | orchestrator | 2025-07-12 20:20:09.503932 | orchestrator | 2025-07-12 20:20:09.503944 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:20:09.503957 | orchestrator | Saturday 12 July 2025 20:20:07 +0000 (0:00:00.956) 0:02:28.126 ********* 2025-07-12 20:20:09.503969 | orchestrator | =============================================================================== 2025-07-12 20:20:09.503980 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 39.68s 2025-07-12 20:20:09.503993 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.17s 2025-07-12 20:20:09.504004 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.07s 2025-07-12 20:20:09.504016 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.44s 2025-07-12 20:20:09.504028 | orchestrator | ovn-db : Restart ovn-sb-db container ------------------------------------ 8.72s 2025-07-12 20:20:09.504040 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.80s 2025-07-12 20:20:09.504052 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.64s 2025-07-12 20:20:09.504064 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.31s 2025-07-12 20:20:09.504076 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.74s 2025-07-12 20:20:09.504088 | orchestrator | ovn-db : Check ovn containers 
------------------------------------------- 2.23s 2025-07-12 20:20:09.504099 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.13s 2025-07-12 20:20:09.504111 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.81s 2025-07-12 20:20:09.504123 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.73s 2025-07-12 20:20:09.504135 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.60s 2025-07-12 20:20:09.504147 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.60s 2025-07-12 20:20:09.504160 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.46s 2025-07-12 20:20:09.504171 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.45s 2025-07-12 20:20:09.504184 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.36s 2025-07-12 20:20:09.504195 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.33s 2025-07-12 20:20:09.504208 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.29s 2025-07-12 20:20:12.553476 | orchestrator | 2025-07-12 20:20:12 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:20:12.556511 | orchestrator | 2025-07-12 20:20:12 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state STARTED 2025-07-12 20:20:12.557018 | orchestrator | 2025-07-12 20:20:12 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:22:54.088452 | orchestrator | 2025-07-12 20:22:54 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED 2025-07-12 20:22:54.092136 | orchestrator | 2025-07-12 20:22:54 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED 2025-07-12 
20:22:54.092262 | orchestrator | 2025-07-12 20:22:54 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:22:54.102659 | orchestrator | 2025-07-12 20:22:54 | INFO  | Task 27de5290-50e4-405e-889a-a7b5b87def70 is in state SUCCESS 2025-07-12 20:22:54.103383 | orchestrator | 2025-07-12 20:22:54.103408 | orchestrator | 2025-07-12 20:22:54.103416 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:22:54.103424 | orchestrator | 2025-07-12 20:22:54.103431 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 20:22:54.103517 | orchestrator | Saturday 12 July 2025 20:16:15 +0000 (0:00:00.527) 0:00:00.527 ********* 2025-07-12 20:22:54.103527 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:22:54.103535 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:22:54.103542 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:22:54.103549 | orchestrator | 2025-07-12 20:22:54.103556 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 20:22:54.103563 | orchestrator | Saturday 12 July 2025 20:16:16 +0000 (0:00:00.669) 0:00:01.196 ********* 2025-07-12 20:22:54.103571 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-07-12 20:22:54.103578 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-07-12 20:22:54.103585 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-07-12 20:22:54.103591 | orchestrator | 2025-07-12 20:22:54.103598 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-07-12 20:22:54.103605 | orchestrator | 2025-07-12 20:22:54.103611 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-07-12 20:22:54.103618 | orchestrator | Saturday 12 July 2025 20:16:17 +0000 (0:00:00.559) 0:00:01.755 ********* 
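The osism client output above follows a simple pattern: look up the state of each outstanding Ansible task, report it, wait one second, and check again until every task reaches a terminal state (SUCCESS or FAILURE). A minimal sketch of that loop is shown below; `get_state` is a hypothetical stand-in for the client's state lookup (the real client queries its task backend), and the names here are illustrative, not the actual osism API.

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=600.0):
    """Poll task states until every task leaves the STARTED state.

    get_state: callable mapping a task id to its current state string.
    Raises TimeoutError if any task is still pending after `timeout` seconds.
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        # Iterate over a sorted copy so we can safely discard from the set.
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

With the one-second interval seen in the log, two long-running tasks produce exactly this kind of interleaved "is in state STARTED" / "Wait 1 second(s) until the next check" output until each reaches SUCCESS.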
2025-07-12 20:22:54.103693 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:22:54.103701 | orchestrator |
2025-07-12 20:22:54.103708 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-07-12 20:22:54.103714 | orchestrator | Saturday 12 July 2025 20:16:18 +0000 (0:00:00.848) 0:00:02.603 *********
2025-07-12 20:22:54.103721 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:22:54.103728 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:22:54.103734 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:22:54.103741 | orchestrator |
2025-07-12 20:22:54.103747 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-07-12 20:22:54.103754 | orchestrator | Saturday 12 July 2025 20:16:18 +0000 (0:00:00.932) 0:00:03.536 *********
2025-07-12 20:22:54.103760 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:22:54.103767 | orchestrator |
2025-07-12 20:22:54.103774 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-07-12 20:22:54.103780 | orchestrator | Saturday 12 July 2025 20:16:20 +0000 (0:00:01.292) 0:00:04.828 *********
2025-07-12 20:22:54.103798 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:22:54.103805 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:22:54.103818 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:22:54.103825 | orchestrator |
2025-07-12 20:22:54.103832 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-07-12 20:22:54.103838 | orchestrator | Saturday 12 July 2025 20:16:21 +0000 (0:00:01.647) 0:00:06.475 *********
2025-07-12 20:22:54.103845 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-07-12 20:22:54.103852 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-07-12 20:22:54.103858 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-07-12 20:22:54.103865 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-07-12 20:22:54.103903 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-07-12 20:22:54.103910 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-07-12 20:22:54.103917 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-07-12 20:22:54.103924 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-07-12 20:22:54.103931 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-07-12 20:22:54.103937 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-07-12 20:22:54.103944 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-07-12 20:22:54.103950 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-07-12 20:22:54.103957 | orchestrator |
2025-07-12 20:22:54.103963 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-07-12 20:22:54.103970 | orchestrator | Saturday 12 July 2025 20:16:25 +0000 (0:00:03.503) 0:00:09.978 *********
2025-07-12 20:22:54.103977 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-07-12 20:22:54.103984 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-07-12 20:22:54.104004 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-07-12 20:22:54.104011 | orchestrator |
2025-07-12 20:22:54.104019 | orchestrator |
TASK [module-load : Persist modules via modules-load.d] ************************ 2025-07-12 20:22:54.104026 | orchestrator | Saturday 12 July 2025 20:16:26 +0000 (0:00:00.788) 0:00:10.767 ********* 2025-07-12 20:22:54.104034 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-07-12 20:22:54.104042 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-07-12 20:22:54.104056 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-07-12 20:22:54.104064 | orchestrator | 2025-07-12 20:22:54.104071 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-07-12 20:22:54.104079 | orchestrator | Saturday 12 July 2025 20:16:28 +0000 (0:00:02.119) 0:00:12.886 ********* 2025-07-12 20:22:54.104086 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-07-12 20:22:54.104094 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.104111 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-07-12 20:22:54.104118 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.104125 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-07-12 20:22:54.104131 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.104138 | orchestrator | 2025-07-12 20:22:54.104144 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-07-12 20:22:54.104151 | orchestrator | Saturday 12 July 2025 20:16:29 +0000 (0:00:01.146) 0:00:14.033 ********* 2025-07-12 20:22:54.104161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 20:22:54.104172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 20:22:54.104180 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 20:22:54.104187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:22:54.104199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:22:54.104222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:22:54.104230 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 20:22:54.104238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 20:22:54.104245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 20:22:54.104251 | orchestrator | 2025-07-12 20:22:54.104258 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-07-12 20:22:54.104265 | orchestrator | Saturday 12 July 2025 20:16:31 +0000 (0:00:02.505) 0:00:16.539 ********* 2025-07-12 20:22:54.104272 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.104278 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.104285 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.104291 | orchestrator | 2025-07-12 20:22:54.104298 | orchestrator | TASK [loadbalancer : Ensuring 
proxysql service config subdirectories exist] **** 2025-07-12 20:22:54.104305 | orchestrator | Saturday 12 July 2025 20:16:33 +0000 (0:00:01.520) 0:00:18.059 ********* 2025-07-12 20:22:54.104311 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-07-12 20:22:54.104318 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-07-12 20:22:54.104324 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-07-12 20:22:54.104331 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-07-12 20:22:54.104337 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-07-12 20:22:54.104344 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-07-12 20:22:54.104350 | orchestrator | 2025-07-12 20:22:54.104357 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-07-12 20:22:54.104363 | orchestrator | Saturday 12 July 2025 20:16:37 +0000 (0:00:03.783) 0:00:21.842 ********* 2025-07-12 20:22:54.104375 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.104382 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.104388 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.104395 | orchestrator | 2025-07-12 20:22:54.104401 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-07-12 20:22:54.104408 | orchestrator | Saturday 12 July 2025 20:16:40 +0000 (0:00:03.676) 0:00:25.519 ********* 2025-07-12 20:22:54.104414 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:22:54.104494 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:22:54.104506 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:22:54.104517 | orchestrator | 2025-07-12 20:22:54.104528 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-07-12 20:22:54.104539 | orchestrator | Saturday 12 July 2025 20:16:42 +0000 (0:00:01.968) 0:00:27.488 ********* 2025-07-12 20:22:54.104581 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 20:22:54.104604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:22:54.104611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:22:54.104620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': 
{'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__cc7909cf4c23435ba9e4ac47e0f9f86e5d816699', '__omit_place_holder__cc7909cf4c23435ba9e4ac47e0f9f86e5d816699'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 20:22:54.104649 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.104657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 20:22:54.104671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:22:54.104678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:22:54.104694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__cc7909cf4c23435ba9e4ac47e0f9f86e5d816699', '__omit_place_holder__cc7909cf4c23435ba9e4ac47e0f9f86e5d816699'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 20:22:54.104701 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.104708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 20:22:54.104715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:22:54.104722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:22:54.104734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__cc7909cf4c23435ba9e4ac47e0f9f86e5d816699', '__omit_place_holder__cc7909cf4c23435ba9e4ac47e0f9f86e5d816699'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 20:22:54.104741 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.104748 | orchestrator | 2025-07-12 20:22:54.104754 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-07-12 20:22:54.104761 | orchestrator | Saturday 12 July 2025 20:16:44 +0000 (0:00:02.026) 0:00:29.514 ********* 2025-07-12 20:22:54.104771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 20:22:54.104782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 20:22:54.104789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 20:22:54.104796 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:22:54.104803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:22:54.104826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__cc7909cf4c23435ba9e4ac47e0f9f86e5d816699', '__omit_place_holder__cc7909cf4c23435ba9e4ac47e0f9f86e5d816699'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 20:22:54.104834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:22:54.104862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:22:54.104876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__cc7909cf4c23435ba9e4ac47e0f9f86e5d816699', '__omit_place_holder__cc7909cf4c23435ba9e4ac47e0f9f86e5d816699'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 20:22:54.104884 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:22:54.104891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:22:54.104917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:9.2.20250711', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__cc7909cf4c23435ba9e4ac47e0f9f86e5d816699', '__omit_place_holder__cc7909cf4c23435ba9e4ac47e0f9f86e5d816699'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-07-12 20:22:54.104925 | orchestrator | 2025-07-12 20:22:54.104932 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-07-12 20:22:54.104938 | orchestrator | Saturday 12 July 2025 20:16:48 +0000 (0:00:03.805) 0:00:33.320 ********* 2025-07-12 20:22:54.104945 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 20:22:54.104973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 20:22:54.104998 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 20:22:54.105075 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:22:54.105084 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:22:54.105098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:22:54.105105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 20:22:54.105112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 20:22:54.105123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 20:22:54.105130 | orchestrator | 2025-07-12 20:22:54.105137 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-07-12 20:22:54.105144 | orchestrator | Saturday 12 July 2025 20:16:52 +0000 (0:00:03.601) 0:00:36.921 ********* 2025-07-12 20:22:54.105151 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-12 20:22:54.105704 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-12 20:22:54.105751 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-07-12 20:22:54.105758 | orchestrator | 2025-07-12 20:22:54.105765 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-07-12 20:22:54.105772 | orchestrator | Saturday 12 July 2025 20:16:54 +0000 (0:00:02.238) 0:00:39.160 ********* 2025-07-12 20:22:54.105779 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-12 20:22:54.105785 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-12 20:22:54.105792 | 
orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-07-12 20:22:54.105808 | orchestrator | 2025-07-12 20:22:54.105815 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-07-12 20:22:54.105821 | orchestrator | Saturday 12 July 2025 20:17:03 +0000 (0:00:09.192) 0:00:48.353 ********* 2025-07-12 20:22:54.105828 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.105835 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.105841 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.105847 | orchestrator | 2025-07-12 20:22:54.105854 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-07-12 20:22:54.105860 | orchestrator | Saturday 12 July 2025 20:17:05 +0000 (0:00:01.502) 0:00:49.855 ********* 2025-07-12 20:22:54.105867 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-12 20:22:54.105876 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-12 20:22:54.105882 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-07-12 20:22:54.105889 | orchestrator | 2025-07-12 20:22:54.105895 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-07-12 20:22:54.105902 | orchestrator | Saturday 12 July 2025 20:17:09 +0000 (0:00:04.623) 0:00:54.479 ********* 2025-07-12 20:22:54.105908 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-12 20:22:54.105915 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-12 
20:22:54.105921 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-07-12 20:22:54.105928 | orchestrator | 2025-07-12 20:22:54.105934 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-07-12 20:22:54.105941 | orchestrator | Saturday 12 July 2025 20:17:14 +0000 (0:00:04.111) 0:00:58.591 ********* 2025-07-12 20:22:54.105947 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-07-12 20:22:54.105954 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-07-12 20:22:54.105960 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-07-12 20:22:54.105975 | orchestrator | 2025-07-12 20:22:54.105981 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-07-12 20:22:54.105994 | orchestrator | Saturday 12 July 2025 20:17:16 +0000 (0:00:02.348) 0:01:00.939 ********* 2025-07-12 20:22:54.106001 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-07-12 20:22:54.106007 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-07-12 20:22:54.106014 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-07-12 20:22:54.106080 | orchestrator | 2025-07-12 20:22:54.106087 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-07-12 20:22:54.106094 | orchestrator | Saturday 12 July 2025 20:17:18 +0000 (0:00:02.570) 0:01:03.510 ********* 2025-07-12 20:22:54.106123 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:22:54.106130 | orchestrator | 2025-07-12 20:22:54.106137 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-07-12 20:22:54.106144 | orchestrator | Saturday 12 July 2025 20:17:20 +0000 (0:00:01.226) 
0:01:04.736 ********* 2025-07-12 20:22:54.106157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 20:22:54.106196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-07-12 20:22:54.106205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 20:22:54.106212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:22:54.106220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:22:54.106227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:22:54.106235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 20:22:54.106256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 20:22:54.106291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 20:22:54.106299 | orchestrator | 2025-07-12 20:22:54.106306 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-07-12 20:22:54.106313 | 
orchestrator | Saturday 12 July 2025 20:17:24 +0000 (0:00:03.925) 0:01:08.661 ********* 2025-07-12 20:22:54.106319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 20:22:54.106326 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:22:54.106333 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-07-12 20:22:54.106340 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.106347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 20:22:54.106358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:22:54.106375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-07-12 20:22:54.106383 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.106389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 20:22:54.106396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:22:54.106403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2025-07-12 20:22:54.106410 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.106417 | orchestrator | 2025-07-12 20:22:54.106423 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-07-12 20:22:54.106430 | orchestrator | Saturday 12 July 2025 20:17:25 +0000 (0:00:00.982) 0:01:09.643 ********* 2025-07-12 20:22:54.106437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 20:22:54.106452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:22:54.106518 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:22:54.106526 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.106533 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-07-12 20:22:54.106540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:22:54.106547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:22:54.106554 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.106560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-07-12 20:22:54.106567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-07-12 20:22:54.106596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-07-12 20:22:54.106604 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.106610 | orchestrator | 2025-07-12 20:22:54.106616 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-07-12 20:22:54.106622 | orchestrator | Saturday 12 July 2025 20:17:27 +0000 (0:00:02.409) 0:01:12.053 ********* 2025-07-12 20:22:54.106633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-07-12 20:22:54.106640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 20:22:54.106646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 20:22:54.106653 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.106659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-07-12 20:22:54.106671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 20:22:54.106682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 20:22:54.106688 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.106699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-07-12 20:22:54.106706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 20:22:54.106712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 20:22:54.106719 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.106725 | orchestrator |
2025-07-12 20:22:54.106732 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-07-12 20:22:54.106739 | orchestrator | Saturday 12 July 2025 20:17:28 +0000 (0:00:01.255) 0:01:13.308 *********
2025-07-12 20:22:54.106745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-07-12 20:22:54.106756 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 20:22:54.106762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 20:22:54.106769 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.106779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-07-12 20:22:54.106790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 20:22:54.106797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 20:22:54.106803 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.106810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-07-12 20:22:54.106822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 20:22:54.106829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 20:22:54.106835 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.106841 | orchestrator |
2025-07-12 20:22:54.106848 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-07-12 20:22:54.106854 | orchestrator | Saturday 12 July 2025 20:17:29 +0000 (0:00:00.698) 0:01:14.006 *********
2025-07-12 20:22:54.106864 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-07-12 20:22:54.106875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 20:22:54.106882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 20:22:54.106889 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.106895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-07-12 20:22:54.106907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 20:22:54.106914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 20:22:54.106920 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.106930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-07-12 20:22:54.106940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 20:22:54.106947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 20:22:54.106953 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.106959 | orchestrator |
2025-07-12 20:22:54.106966 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2025-07-12 20:22:54.106972 | orchestrator | Saturday 12 July 2025 20:17:30 +0000 (0:00:01.059) 0:01:15.066 *********
2025-07-12 20:22:54.106978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-07-12 20:22:54.106990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 20:22:54.106997 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 20:22:54.107003 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.107010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-07-12 20:22:54.107020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 20:22:54.107032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 20:22:54.107038 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.107045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-07-12 20:22:54.107058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 20:22:54.107065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 20:22:54.107071 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.107077 | orchestrator |
2025-07-12 20:22:54.107083 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2025-07-12 20:22:54.107090 | orchestrator | Saturday 12 July 2025 20:17:31 +0000 (0:00:00.603) 0:01:15.669 *********
2025-07-12 20:22:54.107096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-07-12 20:22:54.107114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 20:22:54.107126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 20:22:54.107133 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.107139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-07-12 20:22:54.107151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 20:22:54.107157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 20:22:54.107163 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.107170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-07-12 20:22:54.107176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 20:22:54.107186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 20:22:54.107193 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.107199 | orchestrator |
2025-07-12 20:22:54.107205 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2025-07-12 20:22:54.107215 | orchestrator | Saturday 12 July 2025 20:17:31 +0000 (0:00:00.615) 0:01:16.285 *********
2025-07-12 20:22:54.107222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-07-12 20:22:54.107237 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 20:22:54.107243 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 20:22:54.107249 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.107256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-07-12 20:22:54.107262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 20:22:54.107281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 20:22:54.107288 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.107298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-07-12 20:22:54.107310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-07-12 20:22:54.107316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-07-12 20:22:54.107323 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.107329 | orchestrator |
2025-07-12 20:22:54.107335 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2025-07-12 20:22:54.107341 | orchestrator | Saturday 12 July 2025 20:17:32 +0000 (0:00:01.107) 0:01:17.392 *********
2025-07-12 20:22:54.107347 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-07-12 20:22:54.107354 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-07-12 20:22:54.107360 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2025-07-12 20:22:54.107366 | orchestrator |
2025-07-12 20:22:54.107372 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2025-07-12 20:22:54.107378 | orchestrator | Saturday 12 July 2025 20:17:34 +0000 (0:00:01.413) 0:01:18.806 *********
2025-07-12 20:22:54.107384 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-07-12 20:22:54.107391 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-07-12 20:22:54.107397 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2025-07-12 20:22:54.107403 | orchestrator |
2025-07-12 20:22:54.107409 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2025-07-12 20:22:54.107416 | orchestrator | Saturday 12 July 2025 20:17:35 +0000 (0:00:01.479) 0:01:20.286 *********
2025-07-12 20:22:54.107422
| orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 20:22:54.107428 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 20:22:54.107434 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 20:22:54.107441 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 20:22:54.107447 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.107453 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 20:22:54.107472 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.107478 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 20:22:54.107489 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.107495 | orchestrator | 2025-07-12 20:22:54.107505 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-07-12 20:22:54.107511 | orchestrator | Saturday 12 July 2025 20:17:36 +0000 (0:00:01.249) 0:01:21.535 ********* 2025-07-12 20:22:54.107521 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 
2025-07-12 20:22:54.107528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-07-12 20:22:54.107535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.6.12.20250711', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-07-12 20:22:54.107541 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:22:54.107548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:22:54.107554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.7.3.20250711', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-07-12 20:22:54.107569 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 
20:22:54.107579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 20:22:54.107586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.7.20250711', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-07-12 20:22:54.107592 | orchestrator | 2025-07-12 20:22:54.107598 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-07-12 20:22:54.107605 | orchestrator | Saturday 12 July 2025 20:17:39 +0000 (0:00:02.473) 0:01:24.008 ********* 2025-07-12 20:22:54.107611 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:22:54.107617 | orchestrator | 2025-07-12 20:22:54.107623 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-07-12 20:22:54.107629 | orchestrator | Saturday 12 July 2025 20:17:40 +0000 (0:00:00.723) 0:01:24.732 ********* 2025-07-12 20:22:54.107636 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-07-12 20:22:54.107644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-07-12 20:22:54.107658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-12 20:22:54.107830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-12 20:22:54.107840 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.107847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.107853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.107860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.107867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 
'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-07-12 20:22:54.107886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-12 20:22:54.107893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.107900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.107906 | orchestrator | 2025-07-12 20:22:54.107913 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-07-12 20:22:54.107919 | orchestrator | Saturday 12 July 2025 20:17:44 +0000 (0:00:03.977) 0:01:28.710 ********* 2025-07-12 20:22:54.107925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-07-12 20:22:54.107932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-12 
20:22:54.107942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.107955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.107961 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.107968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-07-12 20:22:54.107975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-07-12 20:22:54.107981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-12 20:22:54.107992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.107998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.108012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-07-12 20:22:54.108019 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.108026 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-listener:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.108032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:19.0.0.20250711', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.108038 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.108045 | orchestrator | 2025-07-12 20:22:54.108051 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-07-12 20:22:54.108057 | orchestrator | Saturday 12 July 2025 20:17:45 +0000 (0:00:00.870) 0:01:29.580 ********* 2025-07-12 20:22:54.108064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-07-12 20:22:54.108072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-07-12 20:22:54.108078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-07-12 20:22:54.108105 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.108112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-07-12 20:22:54.108118 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.108124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-07-12 20:22:54.108131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-07-12 20:22:54.108137 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.108143 | orchestrator | 2025-07-12 20:22:54.108149 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-07-12 20:22:54.108155 | orchestrator | Saturday 12 July 2025 20:17:46 +0000 (0:00:01.077) 0:01:30.657 ********* 2025-07-12 20:22:54.108161 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.108167 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.108174 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.108180 | orchestrator | 2025-07-12 20:22:54.108186 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-07-12 20:22:54.108192 | orchestrator | Saturday 12 July 2025 20:17:47 +0000 (0:00:01.322) 0:01:31.979 ********* 2025-07-12 20:22:54.108198 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.108204 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.108210 | orchestrator | changed: [testbed-node-2] 2025-07-12 
20:22:54.108216 | orchestrator | 2025-07-12 20:22:54.108222 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-07-12 20:22:54.108228 | orchestrator | Saturday 12 July 2025 20:17:49 +0000 (0:00:01.900) 0:01:33.880 ********* 2025-07-12 20:22:54.108234 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:22:54.108240 | orchestrator | 2025-07-12 20:22:54.108246 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-07-12 20:22:54.108256 | orchestrator | Saturday 12 July 2025 20:17:50 +0000 (0:00:00.679) 0:01:34.560 ********* 2025-07-12 20:22:54.108267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:22:54.108275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.108286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.108293 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 
20:22:54.108300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.108348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.108370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:22:54.108392 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.108404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.108410 | orchestrator | 2025-07-12 20:22:54.108417 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-07-12 20:22:54.108423 | orchestrator | Saturday 12 July 2025 20:17:54 +0000 (0:00:04.529) 0:01:39.089 ********* 2025-07-12 20:22:54.108429 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 20:22:54.108439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.108450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.108482 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.108494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 20:22:54.108512 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.108523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.108533 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.108548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 20:22:54.108566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.108577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.108590 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.108598 | orchestrator | 2025-07-12 20:22:54.108605 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-07-12 20:22:54.108612 | orchestrator | Saturday 12 July 2025 20:17:55 +0000 (0:00:01.094) 0:01:40.183 ********* 2025-07-12 20:22:54.108619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-12 20:22:54.108627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-12 20:22:54.108635 | 
orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.108642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-12 20:22:54.108649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-12 20:22:54.108656 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.108663 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-12 20:22:54.108670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-07-12 20:22:54.108677 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.108684 | orchestrator | 2025-07-12 20:22:54.108691 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-07-12 20:22:54.108698 | orchestrator | Saturday 12 July 2025 20:17:56 +0000 (0:00:00.872) 0:01:41.056 ********* 2025-07-12 20:22:54.108705 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.108712 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.108718 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.108725 | orchestrator | 2025-07-12 20:22:54.108732 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-07-12 20:22:54.108740 | orchestrator | Saturday 12 July 2025 20:17:58 +0000 (0:00:01.811) 0:01:42.868 ********* 2025-07-12 
20:22:54.108747 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.108753 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.108760 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.108767 | orchestrator | 2025-07-12 20:22:54.108774 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-07-12 20:22:54.108781 | orchestrator | Saturday 12 July 2025 20:18:00 +0000 (0:00:01.920) 0:01:44.789 ********* 2025-07-12 20:22:54.108788 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.108795 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.108801 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.108808 | orchestrator | 2025-07-12 20:22:54.108815 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-07-12 20:22:54.108822 | orchestrator | Saturday 12 July 2025 20:18:00 +0000 (0:00:00.310) 0:01:45.099 ********* 2025-07-12 20:22:54.108829 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:22:54.108836 | orchestrator | 2025-07-12 20:22:54.108843 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-07-12 20:22:54.108859 | orchestrator | Saturday 12 July 2025 20:18:01 +0000 (0:00:00.672) 0:01:45.771 ********* 2025-07-12 20:22:54.109017 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 
'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-12 20:22:54.109028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-12 20:22:54.109035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-07-12 
20:22:54.109041 | orchestrator | 2025-07-12 20:22:54.109048 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-07-12 20:22:54.109054 | orchestrator | Saturday 12 July 2025 20:18:04 +0000 (0:00:03.000) 0:01:48.772 ********* 2025-07-12 20:22:54.109060 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-12 20:22:54.109067 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.109081 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 
2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-12 20:22:54.109093 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.109100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-07-12 20:22:54.109106 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.109113 | orchestrator | 2025-07-12 20:22:54.109119 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-07-12 20:22:54.109125 | orchestrator | Saturday 12 July 2025 20:18:05 +0000 (0:00:01.561) 0:01:50.334 ********* 2025-07-12 20:22:54.109139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 20:22:54.109147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 20:22:54.109154 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.109161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 20:22:54.109167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 20:22:54.109174 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.109180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 20:22:54.109191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-07-12 20:22:54.109197 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.109203 | orchestrator | 2025-07-12 20:22:54.109229 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-07-12 20:22:54.109236 | orchestrator | Saturday 12 July 2025 20:18:07 +0000 (0:00:01.968) 0:01:52.303 ********* 2025-07-12 20:22:54.109243 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.109249 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.109255 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.109261 | orchestrator | 2025-07-12 20:22:54.109270 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-07-12 20:22:54.109277 | orchestrator | Saturday 12 July 2025 20:18:08 +0000 (0:00:00.771) 0:01:53.074 ********* 2025-07-12 20:22:54.109283 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.109289 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.109295 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.109301 | orchestrator | 2025-07-12 20:22:54.109307 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-07-12 20:22:54.109313 | orchestrator | Saturday 12 July 2025 20:18:09 +0000 (0:00:01.140) 0:01:54.214 ********* 2025-07-12 20:22:54.109319 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:22:54.109326 | orchestrator | 2025-07-12 20:22:54.109332 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-07-12 20:22:54.109338 | orchestrator | Saturday 12 July 2025 20:18:10 +0000 (0:00:00.714) 
0:01:54.929 ********* 2025-07-12 20:22:54.109344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:22:54.109351 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.109358 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.109374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.109397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 
'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 20:22:54.109405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.109411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.109418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.109431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 20:22:54.109444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.109451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.109474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.109481 | orchestrator |
2025-07-12 20:22:54.109487 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2025-07-12 20:22:54.109493 | orchestrator | Saturday 12 July 2025 20:18:14 +0000 (0:00:03.747) 0:01:58.677 *********
2025-07-12 20:22:54.109500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 20:22:54.109520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.109535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.109542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.109548 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.109555 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 20:22:54.109566 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.109572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.109583 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.109589 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.109599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 20:22:54.109606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.109613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.109626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.109632 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.109638 | orchestrator |
2025-07-12 20:22:54.109644 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-07-12 20:22:54.109651 | orchestrator | Saturday 12 July 2025 20:18:15 +0000 (0:00:01.358) 0:02:00.035 *********
2025-07-12 20:22:54.109658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-07-12 20:22:54.109666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-07-12 20:22:54.109673 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.109684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-07-12 20:22:54.109695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-07-12 20:22:54.109703 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.109709 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-07-12 20:22:54.109717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-07-12 20:22:54.109724 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.109730 | orchestrator |
2025-07-12 20:22:54.109738 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-07-12 20:22:54.109745 | orchestrator | Saturday 12 July 2025 20:18:16 +0000 (0:00:00.959) 0:02:00.994 *********
2025-07-12 20:22:54.109752 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:22:54.109758 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:22:54.109765 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:22:54.109772 | orchestrator |
2025-07-12 20:22:54.109779 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-07-12 20:22:54.109791 | orchestrator | Saturday 12 July 2025 20:18:17 +0000 (0:00:01.315) 0:02:02.309 *********
2025-07-12 20:22:54.109836 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:22:54.109843 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:22:54.109849 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:22:54.109856 | orchestrator |
2025-07-12 20:22:54.109863 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-07-12 20:22:54.109870 | orchestrator | Saturday 12 July 2025 20:18:19 +0000 (0:00:02.206) 0:02:04.515 *********
2025-07-12 20:22:54.109877 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.109883 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.109890 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.109897 | orchestrator |
2025-07-12 20:22:54.109904 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-07-12 20:22:54.109911 | orchestrator | Saturday 12 July 2025 20:18:20 +0000 (0:00:00.540) 0:02:05.056 *********
2025-07-12 20:22:54.109918 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.109925 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.109932 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.109939 | orchestrator |
2025-07-12 20:22:54.109946 | orchestrator | TASK [include_role : designate] ************************************************
2025-07-12 20:22:54.109953 | orchestrator | Saturday 12 July 2025 20:18:20 +0000 (0:00:00.299) 0:02:05.355 *********
2025-07-12 20:22:54.109959 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:22:54.109966 | orchestrator |
2025-07-12 20:22:54.109972 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-07-12 20:22:54.109978 | orchestrator | Saturday 12 July 2025 20:18:21 +0000 (0:00:00.782) 0:02:06.138 *********
2025-07-12 20:22:54.109984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 20:22:54.109991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 20:22:54.110899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.110931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 20:22:54.110949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.110957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 20:22:54.110966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.110975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.110997 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.111012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.111020 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.111028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.111036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.111045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.111061 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 20:22:54.111080 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 20:22:54.111088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.111096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.111104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.111113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.111121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.111129 | orchestrator |
2025-07-12 20:22:54.111137 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-07-12 20:22:54.111146 | orchestrator | Saturday 12 July 2025 20:18:26 +0000 (0:00:04.887) 0:02:11.025 *********
2025-07-12 20:22:54.111171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 20:22:54.111180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 20:22:54.111189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.111198 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.111207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 20:22:54.111220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.111238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 20:22:54.111248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.111257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.111267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.111276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.111301 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.111307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.111325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.111331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.111337 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.111342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:22:54.111348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 20:22:54.111353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.111359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.111376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.111382 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.111388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.111394 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.111399 | orchestrator | 2025-07-12 20:22:54.111404 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2025-07-12 20:22:54.111417 | orchestrator | Saturday 12 July 2025 20:18:27 +0000 (0:00:00.868) 0:02:11.893 ********* 2025-07-12 20:22:54.111423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-07-12 20:22:54.111430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-07-12 20:22:54.111436 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.111442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-07-12 20:22:54.111447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-07-12 20:22:54.111493 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.111500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2025-07-12 20:22:54.111506 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2025-07-12 20:22:54.111516 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.111522 | orchestrator | 2025-07-12 20:22:54.111528 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2025-07-12 20:22:54.111535 | orchestrator | Saturday 12 July 2025 20:18:28 +0000 (0:00:01.021) 0:02:12.915 ********* 2025-07-12 20:22:54.111541 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.111547 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.111553 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.111559 | orchestrator | 2025-07-12 20:22:54.111565 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2025-07-12 20:22:54.111571 | orchestrator | Saturday 12 July 2025 20:18:29 +0000 (0:00:01.601) 0:02:14.517 ********* 2025-07-12 20:22:54.111577 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.111583 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.111589 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.111594 | orchestrator | 2025-07-12 20:22:54.111600 | orchestrator | TASK [include_role : etcd] ***************************************************** 2025-07-12 20:22:54.111606 | orchestrator | Saturday 12 July 2025 20:18:31 +0000 (0:00:01.827) 0:02:16.344 ********* 2025-07-12 20:22:54.111613 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.111619 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.111624 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.111631 | orchestrator | 2025-07-12 20:22:54.111637 | orchestrator | TASK [include_role : glance] *************************************************** 2025-07-12 20:22:54.111643 | orchestrator | Saturday 12 July 
2025 20:18:32 +0000 (0:00:00.292) 0:02:16.637 ********* 2025-07-12 20:22:54.111653 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:22:54.111659 | orchestrator | 2025-07-12 20:22:54.111664 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2025-07-12 20:22:54.111669 | orchestrator | Saturday 12 July 2025 20:18:32 +0000 (0:00:00.759) 0:02:17.397 ********* 2025-07-12 20:22:54.111682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:22:54.111690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 20:22:54.111891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:22:54.111903 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:22:54.111945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 20:22:54.111952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 20:22:54.111963 | orchestrator | 2025-07-12 20:22:54.111968 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-07-12 20:22:54.111974 | orchestrator | Saturday 12 July 2025 20:18:36 +0000 (0:00:03.904) 0:02:21.301 ********* 2025-07-12 20:22:54.111995 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 20:22:54.112003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 20:22:54.112013 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.112034 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 20:22:54.112047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 20:22:54.112057 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.112063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 20:22:54.112077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:29.0.1.20250711', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-07-12 20:22:54.112088 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.112094 | orchestrator | 2025-07-12 20:22:54.112099 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-07-12 20:22:54.112105 | orchestrator | Saturday 12 July 2025 20:18:39 +0000 (0:00:02.768) 0:02:24.069 ********* 2025-07-12 20:22:54.112110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 20:22:54.112116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 20:22:54.112122 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.112128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 20:22:54.112133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 20:22:54.112142 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.112151 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 20:22:54.112157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-07-12 20:22:54.112163 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.112168 | orchestrator | 2025-07-12 20:22:54.112173 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-07-12 20:22:54.112179 | orchestrator | Saturday 12 July 2025 20:18:42 +0000 (0:00:03.080) 0:02:27.150 ********* 2025-07-12 20:22:54.112184 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.112193 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.112199 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.112204 | orchestrator | 2025-07-12 20:22:54.112210 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-07-12 20:22:54.112215 | orchestrator | Saturday 12 July 2025 20:18:44 +0000 (0:00:01.488) 0:02:28.639 ********* 2025-07-12 20:22:54.112220 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.112226 | orchestrator | 
changed: [testbed-node-1]
2025-07-12 20:22:54.112231 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:22:54.112236 | orchestrator |
2025-07-12 20:22:54.112242 | orchestrator | TASK [include_role : gnocchi] **************************************************
2025-07-12 20:22:54.112247 | orchestrator | Saturday 12 July 2025 20:18:45 +0000 (0:00:01.876) 0:02:30.515 *********
2025-07-12 20:22:54.112252 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.112258 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.112263 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.112269 | orchestrator |
2025-07-12 20:22:54.112274 | orchestrator | TASK [include_role : grafana] **************************************************
2025-07-12 20:22:54.112279 | orchestrator | Saturday 12 July 2025 20:18:46 +0000 (0:00:00.288) 0:02:30.804 *********
2025-07-12 20:22:54.112285 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:22:54.112290 | orchestrator |
2025-07-12 20:22:54.112295 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ********************
2025-07-12 20:22:54.112301 | orchestrator | Saturday 12 July 2025 20:18:47 +0000 (0:00:00.845) 0:02:31.650 *********
2025-07-12 20:22:54.112306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 20:22:54.112313 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:22:54.112322 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:22:54.112328 | orchestrator | 2025-07-12 20:22:54.112335 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-07-12 20:22:54.112341 | orchestrator | Saturday 12 July 2025 20:18:50 +0000 (0:00:03.435) 0:02:35.085 ********* 2025-07-12 20:22:54.112347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 20:22:54.112357 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.112363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 20:22:54.112369 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.112374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 20:22:54.112380 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.112385 | orchestrator |
2025-07-12 20:22:54.112391 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] ***********************
2025-07-12 20:22:54.112396 | orchestrator | Saturday 12 July 2025 20:18:50 +0000 (0:00:00.424) 0:02:35.510 *********
2025-07-12 20:22:54.112401 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-07-12 20:22:54.112407 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-07-12 20:22:54.112413 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.112418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-07-12 20:22:54.112423 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-07-12 20:22:54.112541 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.112548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-07-12 20:22:54.112554 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-07-12 20:22:54.112570 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.112577 | orchestrator |
2025-07-12 20:22:54.112583 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2025-07-12 20:22:54.112590 | orchestrator | Saturday 12 July 2025 20:18:51 +0000 (0:00:00.747) 0:02:36.257 *********
2025-07-12 20:22:54.112596 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:22:54.112602 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:22:54.112608 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:22:54.112614 | orchestrator |
2025-07-12 20:22:54.112624 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-07-12 20:22:54.112631 | orchestrator | Saturday 12 July 2025 20:18:53 +0000 (0:00:01.566) 0:02:37.824 *********
2025-07-12 20:22:54.112637 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:22:54.112643 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:22:54.112649 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:22:54.112655 | orchestrator |
2025-07-12 20:22:54.112661 | orchestrator | TASK [include_role : heat] *****************************************************
2025-07-12 20:22:54.112667 | orchestrator | Saturday 12 July 2025 20:18:55 +0000 (0:00:02.132) 0:02:39.957 *********
2025-07-12 20:22:54.112674 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.112679 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.112685 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.112692 | orchestrator |
2025-07-12 20:22:54.112698 | orchestrator | TASK [include_role : horizon] **************************************************
2025-07-12 20:22:54.112704 | orchestrator | Saturday 12 July 2025 20:18:55 +0000 (0:00:00.354) 0:02:40.312 *********
2025-07-12 20:22:54.112710 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:22:54.112716 | orchestrator |
2025-07-12 20:22:54.112722 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-07-12 20:22:54.112729 | orchestrator | Saturday 12 July 2025 20:18:56 +0000 (0:00:00.882) 0:02:41.195 ********* 2025-07-12 20:22:54.112736 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:22:54.113888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:22:54.113919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:22:54.113932 | orchestrator | 2025-07-12 20:22:54.113938 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-07-12 20:22:54.113943 | orchestrator | Saturday 12 July 2025 20:18:59 +0000 (0:00:03.168) 0:02:44.363 ********* 2025-07-12 20:22:54.113958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 20:22:54.113965 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.113974 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 
'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-12 20:22:54.113984 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.113994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-07-12 20:22:54.114001 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.114006 | orchestrator |
2025-07-12 20:22:54.114011 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2025-07-12 20:22:54.114059 | orchestrator | Saturday 12 July 2025 20:19:00 +0000 (0:00:00.569) 0:02:44.933 *********
2025-07-12 20:22:54.114066 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-07-12 20:22:54.114074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-07-12 20:22:54.114085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-07-12 20:22:54.114091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-07-12 20:22:54.114098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-07-12 20:22:54.114107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-07-12 20:22:54.114112 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.114122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-07-12 20:22:54.114128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-07-12 20:22:54.114133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-07-12 20:22:54.114139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-07-12 20:22:54.114144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-07-12 20:22:54.114150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-07-12 20:22:54.114155 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-07-12 20:22:54.114161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-07-12 20:22:54.114169 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.114175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-07-12 20:22:54.114180 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.114185 | orchestrator |
2025-07-12 20:22:54.114191 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************
2025-07-12 20:22:54.114196 | orchestrator | Saturday 12 July 2025 20:19:01 +0000 (0:00:00.872) 0:02:45.805 *********
2025-07-12 20:22:54.114201 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:22:54.114207 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:22:54.114212 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:22:54.114217 | orchestrator |
2025-07-12 20:22:54.114223 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************
2025-07-12 20:22:54.114228 | orchestrator | Saturday 12 July 2025 20:19:02 +0000 (0:00:01.397) 0:02:47.203 *********
2025-07-12 20:22:54.114233 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:22:54.114238 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:22:54.114244 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:22:54.114249 | orchestrator |
2025-07-12 20:22:54.114254 | orchestrator | TASK [include_role : influxdb] *************************************************
2025-07-12 20:22:54.114259 | orchestrator | Saturday 12 July 2025 20:19:04 +0000 (0:00:02.102) 0:02:49.305 *********
2025-07-12 20:22:54.114265 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.114270 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.114275 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.114280 | orchestrator |
2025-07-12 20:22:54.114286 | orchestrator | TASK [include_role : ironic] ***************************************************
2025-07-12 20:22:54.114291 | orchestrator | Saturday 12 July 2025 20:19:05 +0000 (0:00:00.426) 0:02:49.795 *********
2025-07-12 20:22:54.114296 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.114302 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.114307 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.114312 | orchestrator |
2025-07-12 20:22:54.114320 | orchestrator | TASK [include_role : keystone] *************************************************
2025-07-12 20:22:54.114325 | orchestrator | Saturday 12 July 2025 20:19:05 +0000 (0:00:00.426) 0:02:50.222 *********
2025-07-12 20:22:54.114330 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:22:54.114336 | orchestrator |
2025-07-12 20:22:54.114341 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] *******************
2025-07-12 20:22:54.114349 | orchestrator | Saturday 12 July 2025 20:19:06 +0000 (0:00:01.327) 0:02:51.549 *********
2025-07-12 20:22:54.114355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 20:22:54.114368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 20:22:54.114375 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 20:22:54.114381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 20:22:54.114390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 20:22:54.114399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 20:22:54.114405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 20:22:54.114415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 20:22:54.114420 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 20:22:54.114426 | orchestrator |
2025-07-12 20:22:54.114431 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] ***
2025-07-12 20:22:54.114437 | orchestrator | Saturday 12 July 2025 20:19:12 +0000 (0:00:05.190) 0:02:56.740 *********
2025-07-12 20:22:54.114445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 20:22:54.114455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 20:22:54.114487 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 20:22:54.114498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 20:22:54.114505 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.114511 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 20:22:54.114517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 20:22:54.114523 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.114538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 20:22:54.114545 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 20:22:54.114557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 20:22:54.114563 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.114569 | orchestrator |
2025-07-12 20:22:54.114575 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2025-07-12 20:22:54.114581 | orchestrator | Saturday 12 July 2025 20:19:12 +0000 (0:00:00.760) 0:02:57.501 *********
2025-07-12 20:22:54.114587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-07-12 20:22:54.114595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-07-12 20:22:54.114601 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.114607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-07-12 20:22:54.114613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-07-12 20:22:54.114619 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.114625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-07-12 20:22:54.114631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2025-07-12 20:22:54.114637 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.114643 | orchestrator |
2025-07-12 20:22:54.114652 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2025-07-12 20:22:54.114658 | orchestrator | Saturday 12 July 2025 20:19:14 +0000 (0:00:01.406) 0:02:58.908 *********
2025-07-12 20:22:54.114664 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:22:54.114670 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:22:54.114676 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:22:54.114682 | orchestrator |
2025-07-12 20:22:54.114688 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2025-07-12 20:22:54.114998 | orchestrator | Saturday 12 July 2025 20:19:16 +0000 (0:00:01.652) 0:03:00.560 *********
2025-07-12 20:22:54.115010 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:22:54.115016 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:22:54.115022 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:22:54.115027 | orchestrator |
2025-07-12 20:22:54.115032 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2025-07-12 20:22:54.115038 | orchestrator | Saturday 12 July 2025 20:19:18 +0000 (0:00:00.328) 0:03:02.741 *********
2025-07-12 20:22:54.115043 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.115049 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.115054 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.115059 | orchestrator |
2025-07-12 20:22:54.115065 | orchestrator | TASK [include_role : magnum] ***************************************************
2025-07-12 20:22:54.115070 | orchestrator | Saturday 12 July 2025 20:19:18 +0000 (0:00:00.328) 0:03:03.070 *********
2025-07-12 20:22:54.115075 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:22:54.115081 | orchestrator |
2025-07-12 20:22:54.115086 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2025-07-12 20:22:54.115091 | orchestrator | Saturday 12 July 2025 20:19:19 +0000 (0:00:01.227) 0:03:04.297 *********
2025-07-12 20:22:54.115098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-07-12 20:22:54.115105 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-07-12 20:22:54.115111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.115131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.115137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-07-12 20:22:54.115143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.115149 | orchestrator |
2025-07-12 20:22:54.115154 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2025-07-12 20:22:54.115160 | orchestrator | Saturday 12 July 2025 20:19:23 +0000 (0:00:03.793) 0:03:08.091 *********
2025-07-12 20:22:54.115166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-07-12 20:22:54.115171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.115181 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.115193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-07-12 20:22:54.115200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.115205 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.115211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-07-12 20:22:54.115216 | orchestrator | skipping: [testbed-node-2] => (item={'key':
'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.115222 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.115231 | orchestrator | 2025-07-12 20:22:54.115236 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-07-12 20:22:54.115258 | orchestrator | Saturday 12 July 2025 20:19:24 +0000 (0:00:00.661) 0:03:08.752 ********* 2025-07-12 20:22:54.115264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-12 20:22:54.115270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-12 20:22:54.115276 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.115281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-12 20:22:54.115290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-12 20:22:54.115296 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.115304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-07-12 20:22:54.115310 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-07-12 20:22:54.115315 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.115321 | orchestrator | 2025-07-12 20:22:54.115326 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-07-12 20:22:54.115331 | orchestrator | Saturday 12 July 2025 20:19:25 +0000 (0:00:01.216) 0:03:09.968 ********* 2025-07-12 20:22:54.115337 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.115342 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.115347 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.115352 | orchestrator | 2025-07-12 20:22:54.115358 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-07-12 20:22:54.115405 | orchestrator | Saturday 12 July 2025 20:19:26 +0000 (0:00:01.370) 0:03:11.339 ********* 2025-07-12 20:22:54.115412 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.115418 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.115423 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.115513 | orchestrator | 2025-07-12 20:22:54.115520 | orchestrator | TASK [include_role : manila] *************************************************** 2025-07-12 20:22:54.115526 | orchestrator | Saturday 12 July 2025 20:19:28 +0000 (0:00:02.147) 0:03:13.487 ********* 2025-07-12 20:22:54.115531 | orchestrator | 
included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:22:54.115536 | orchestrator | 2025-07-12 20:22:54.115542 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-07-12 20:22:54.115547 | orchestrator | Saturday 12 July 2025 20:19:30 +0000 (0:00:01.404) 0:03:14.891 ********* 2025-07-12 20:22:54.115553 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-12 20:22:54.115565 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.115571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': 
{'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-12 20:22:54.115584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.115591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.115596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.115602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.115647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.115654 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-07-12 20:22:54.115926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.115935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.115941 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.115946 | orchestrator | 2025-07-12 20:22:54.115952 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-07-12 20:22:54.115957 | orchestrator | Saturday 12 July 2025 20:19:34 +0000 (0:00:04.583) 0:03:19.474 ********* 2025-07-12 20:22:54.115963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-12 20:22:54.115975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.115980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.116020 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.116026 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.116032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-12 20:22:54.116038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.116073 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': 
{'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.116080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.116086 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.116095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-07-12 20:22:54.116105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:19.0.2.20250711', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.116110 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.116115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:19.0.2.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.116124 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.116129 | orchestrator | 2025-07-12 20:22:54.116134 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-07-12 20:22:54.116139 | orchestrator | Saturday 12 July 2025 20:19:36 +0000 (0:00:01.123) 0:03:20.598 ********* 2025-07-12 20:22:54.116144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-12 20:22:54.116149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-12 20:22:54.116154 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.116159 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-12 20:22:54.116163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-12 20:22:54.116168 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.116173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-07-12 20:22:54.116178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-07-12 20:22:54.116183 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.116187 | orchestrator | 2025-07-12 20:22:54.116192 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-07-12 20:22:54.116197 | orchestrator | Saturday 12 July 2025 20:19:37 +0000 (0:00:01.665) 0:03:22.264 ********* 2025-07-12 20:22:54.116202 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.116207 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.116211 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.116216 | orchestrator | 2025-07-12 20:22:54.116238 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-07-12 20:22:54.116243 | orchestrator | Saturday 12 July 2025 20:19:39 +0000 (0:00:01.403) 0:03:23.667 ********* 2025-07-12 20:22:54.116248 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.116253 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.116257 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.116262 | orchestrator | 2025-07-12 20:22:54.116270 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-07-12 20:22:54.116274 | orchestrator | Saturday 12 July 2025 20:19:41 +0000 (0:00:02.327) 0:03:25.994 ********* 2025-07-12 20:22:54.116279 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:22:54.116284 | orchestrator | 2025-07-12 20:22:54.116314 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-07-12 20:22:54.116322 | orchestrator | Saturday 12 July 2025 20:19:42 +0000 (0:00:01.185) 0:03:27.180 ********* 2025-07-12 20:22:54.116328 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-12 20:22:54.116332 | orchestrator | 2025-07-12 
20:22:54.116337 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2025-07-12 20:22:54.116342 | orchestrator | Saturday 12 July 2025 20:19:45 +0000 (0:00:03.351) 0:03:30.532 *********
2025-07-12 20:22:54.116352 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-07-12 20:22:54.116359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-07-12 20:22:54.116364 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.116375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-07-12 20:22:54.116485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-07-12 20:22:54.116494 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.116499 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-07-12 20:22:54.116506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-07-12 20:22:54.116540 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.116546 | orchestrator |
2025-07-12 20:22:54.116551 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] ***
2025-07-12 20:22:54.116560 | orchestrator | Saturday 12 July 2025 20:19:48 +0000 (0:00:02.114) 0:03:32.646 *********
2025-07-12 20:22:54.116572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-07-12 20:22:54.116582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-07-12 20:22:54.116588 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.116595 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-07-12 20:22:54.116608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-07-12 20:22:54.116614 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.116633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2025-07-12 20:22:54.116640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-07-12 20:22:54.116645 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.116651 | orchestrator |
2025-07-12 20:22:54.116656 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] ***********************
2025-07-12 20:22:54.116662 | orchestrator | Saturday 12 July 2025 20:19:50 +0000 (0:00:02.328) 0:03:34.975 *********
2025-07-12 20:22:54.116670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-12 20:22:54.116683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-12 20:22:54.116689 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.116694 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-12 20:22:54.116700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-12 20:22:54.116705 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.116711 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-12 20:22:54.116717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})
2025-07-12 20:22:54.116722 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.116728 | orchestrator |
2025-07-12 20:22:54.116733 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-07-12 20:22:54.116738 | orchestrator | Saturday 12 July 2025 20:19:53 +0000 (0:00:02.805) 0:03:37.781 *********
2025-07-12 20:22:54.116744 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:22:54.116749 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:22:54.116755 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:22:54.116764 | orchestrator |
2025-07-12 20:22:54.116769 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-07-12 20:22:54.116775 | orchestrator | Saturday 12 July 2025 20:19:55 +0000 (0:00:01.844) 0:03:39.626 *********
2025-07-12 20:22:54.116781 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.116786 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.116791 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.116796 | orchestrator |
2025-07-12 20:22:54.116802 | orchestrator | TASK [include_role : masakari] *************************************************
2025-07-12 20:22:54.116807 | orchestrator | Saturday 12 July 2025 20:19:56 +0000 (0:00:01.486) 0:03:41.113 *********
2025-07-12 20:22:54.116813 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.116818 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.116823 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.116829 | orchestrator |
2025-07-12 20:22:54.116834 | orchestrator | TASK [include_role : memcached] ************************************************
2025-07-12 20:22:54.116844 | orchestrator | Saturday 12 July 2025 20:19:56 +0000 (0:00:00.318) 0:03:41.431 *********
2025-07-12 20:22:54.116850 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:22:54.116856 | orchestrator |
2025-07-12 20:22:54.116861 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-07-12 20:22:54.116870 | orchestrator | Saturday 12 July 2025 20:19:58 +0000 (0:00:01.329) 0:03:42.761 *********
2025-07-12 20:22:54.116876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-07-12 20:22:54.116882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-07-12 20:22:54.116888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-07-12 20:22:54.116894 | orchestrator |
2025-07-12 20:22:54.116899 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] ***
2025-07-12 20:22:54.116905 | orchestrator | Saturday 12 July 2025 20:19:59 +0000 (0:00:01.514) 0:03:44.275 *********
2025-07-12 20:22:54.116914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-07-12 20:22:54.116919 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.116927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-07-12 20:22:54.116932 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.116940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.18.20250711', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2025-07-12 20:22:54.116945 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.116950 | orchestrator |
2025-07-12 20:22:54.116955 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2025-07-12 20:22:54.116959 | orchestrator | Saturday 12 July 2025 20:20:00 +0000 (0:00:00.409) 0:03:44.684 *********
2025-07-12 20:22:54.116965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-07-12 20:22:54.116970 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.116975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-07-12 20:22:54.116980 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.117168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-07-12 20:22:54.117179 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.117184 | orchestrator |
2025-07-12 20:22:54.117189 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-07-12 20:22:54.117198 | orchestrator | Saturday 12 July 2025 20:20:00 +0000 (0:00:00.866) 0:03:45.551 *********
2025-07-12 20:22:54.117203 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.117208 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.117213 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.117217 | orchestrator |
2025-07-12 20:22:54.117222 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-07-12 20:22:54.117227 | orchestrator | Saturday 12 July 2025 20:20:01 +0000 (0:00:00.469) 0:03:46.021 *********
2025-07-12 20:22:54.117231 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.117236 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.117241 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.117246 | orchestrator |
2025-07-12 20:22:54.117250 | orchestrator | TASK [include_role : mistral] **************************************************
2025-07-12 20:22:54.117255 | orchestrator | Saturday 12 July 2025 20:20:02 +0000 (0:00:01.326) 0:03:47.347 *********
2025-07-12 20:22:54.117260 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.117265 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.117269 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.117274 | orchestrator |
2025-07-12 20:22:54.117279 | orchestrator | TASK [include_role : neutron] **************************************************
2025-07-12 20:22:54.117283 | orchestrator | Saturday 12 July 2025 20:20:03 +0000 (0:00:00.343) 0:03:47.691 *********
2025-07-12 20:22:54.117288 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:22:54.117293 | orchestrator |
2025-07-12 20:22:54.117298 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-07-12 20:22:54.117302 | orchestrator | Saturday 12 July 2025 20:20:04 +0000 (0:00:01.514) 0:03:49.205 *********
2025-07-12 20:22:54.117311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:22:54.117320 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.117325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.117334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.117339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-07-12 20:22:54.117345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.117355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-07-12 20:22:54.117361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-07-12 20:22:54.117367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.117375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:22:54.117381 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-07-12 20:22:54.117386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-07-12 20:22:54.117393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2025-07-12 20:22:54.117401 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:22:54.117407 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.117415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 20:22:54.117421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.117426 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:22:54.117436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.117442 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.117451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.117472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 20:22:54.117477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.117483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 20:22:54.117494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 20:22:54.117499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.117508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:22:54.117513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:22:54.117518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.117523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 20:22:54.117533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.117544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 20:22:54.117549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': 
'30'}}})  2025-07-12 20:22:54.117554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.117559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.117570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 20:22:54.117575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 20:22:54.117584 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 
6640'], 'timeout': '30'}}})  2025-07-12 20:22:54.117589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.117594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.117599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 
20:22:54.117607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 20:22:54.117616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.117624 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:22:54.117662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.117668 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 20:22:54.117673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 20:22:54.117678 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.117689 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 20:22:54.117698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:22:54.117703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.117708 | orchestrator | 2025-07-12 20:22:54.117713 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-07-12 20:22:54.117718 | orchestrator | Saturday 12 July 2025 20:20:09 +0000 (0:00:04.819) 0:03:54.025 ********* 2025-07-12 20:22:54.117723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 20:22:54.117731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.117963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 
20:22:54.118001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 20:22:54.118014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 20:22:54.118052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 20:22:54.118072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118078 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:22:54.118084 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 20:22:54.118096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 20:22:54.118101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 20:22:54.118119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 
'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 20:22:54.118160 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 20:22:54.118165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:22:54.118170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118180 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 20:22:54.118185 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.118191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 20:22:54.118206 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:22:54.118219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 20:22:54.118229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 20:22:54.118234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 20:22:54.118255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-07-12 20:22:54.118260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:25.2.1.20250711', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118270 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118283 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-07-12 20:22:54.118290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118296 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:25.2.1.20250711', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 20:22:54.118301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 20:22:54.118306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 
'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:22:54.118321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-07-12 20:22:54.118338 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-07-12 20:22:54.118343 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:22:54.118348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:25.2.1.20250711', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118411 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-07-12 20:22:54.118418 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.118430 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:25.2.1.20250711', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:22:54 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:22:54.118812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/kolla/release/neutron-ovn-vpn-agent:25.2.1.20250711', 'privileged': True, 'enabled': False, 'group': 'neutron-ovn-vpn-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.118834 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.118839 | orchestrator | 2025-07-12 20:22:54.118844 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-07-12 20:22:54.118850 | orchestrator | Saturday 12 July 2025 20:20:11 +0000 (0:00:02.045) 0:03:56.071 ********* 2025-07-12 20:22:54.118870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-12 20:22:54.118876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-12 20:22:54.118882 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.118887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-12 20:22:54.118892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-12 20:22:54.118922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-07-12 20:22:54.118927 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.118932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-07-12 20:22:54.118936 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.118941 | orchestrator | 2025-07-12 20:22:54.118946 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-07-12 20:22:54.118950 | orchestrator | Saturday 12 July 2025 20:20:13 +0000 (0:00:02.098) 0:03:58.169 ********* 2025-07-12 20:22:54.118955 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.118959 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.118964 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.118968 | orchestrator | 2025-07-12 20:22:54.118973 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 
2025-07-12 20:22:54.118977 | orchestrator | Saturday 12 July 2025 20:20:14 +0000 (0:00:01.203) 0:03:59.372 ********* 2025-07-12 20:22:54.118982 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.118986 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.118991 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.118995 | orchestrator | 2025-07-12 20:22:54.119000 | orchestrator | TASK [include_role : placement] ************************************************ 2025-07-12 20:22:54.119004 | orchestrator | Saturday 12 July 2025 20:20:16 +0000 (0:00:02.081) 0:04:01.454 ********* 2025-07-12 20:22:54.119009 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:22:54.119013 | orchestrator | 2025-07-12 20:22:54.119018 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-07-12 20:22:54.119022 | orchestrator | Saturday 12 July 2025 20:20:18 +0000 (0:00:01.325) 0:04:02.779 ********* 2025-07-12 20:22:54.119050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:22:54.119057 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:22:54.119086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:22:54.119092 | orchestrator | 2025-07-12 20:22:54.119096 | orchestrator | TASK [haproxy-config : Add configuration 
for placement when using single external frontend] *** 2025-07-12 20:22:54.119101 | orchestrator | Saturday 12 July 2025 20:20:21 +0000 (0:00:02.982) 0:04:05.762 ********* 2025-07-12 20:22:54.119105 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:22:54.119110 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.119121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:22:54.119127 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.119131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:22:54.119141 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.119145 | orchestrator | 2025-07-12 20:22:54.119150 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-07-12 20:22:54.119154 | orchestrator | Saturday 12 July 2025 20:20:21 +0000 (0:00:00.654) 0:04:06.416 ********* 2025-07-12 20:22:54.119159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 20:22:54.119164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 
'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 20:22:54.119169 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.119174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 20:22:54.119179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 20:22:54.119183 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.119188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 20:22:54.119192 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-07-12 20:22:54.119197 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.119201 | orchestrator | 2025-07-12 20:22:54.119206 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-07-12 20:22:54.119210 | orchestrator | Saturday 12 July 2025 20:20:22 +0000 (0:00:01.006) 0:04:07.422 ********* 2025-07-12 20:22:54.119215 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.119219 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.119224 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.119228 | orchestrator | 2025-07-12 20:22:54.119233 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-07-12 20:22:54.119237 | orchestrator | Saturday 
12 July 2025 20:20:24 +0000 (0:00:01.358) 0:04:08.780 ********* 2025-07-12 20:22:54.119242 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.119246 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.119251 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.119276 | orchestrator | 2025-07-12 20:22:54.119281 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-07-12 20:22:54.119285 | orchestrator | Saturday 12 July 2025 20:20:26 +0000 (0:00:02.134) 0:04:10.915 ********* 2025-07-12 20:22:54.119290 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:22:54.119294 | orchestrator | 2025-07-12 20:22:54.119299 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-07-12 20:22:54.119303 | orchestrator | Saturday 12 July 2025 20:20:27 +0000 (0:00:01.288) 0:04:12.204 ********* 2025-07-12 20:22:54.119385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:22:54.119519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.119527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.119533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:22:54.119542 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.119551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:22:54.119560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.119565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.119570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.119575 | orchestrator | 2025-07-12 20:22:54.119579 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-07-12 20:22:54.119584 | orchestrator | Saturday 12 July 2025 20:20:32 +0000 (0:00:04.543) 0:04:16.747 ********* 2025-07-12 20:22:54.119607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 20:22:54.119618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.119623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.119628 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.119633 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 20:22:54.119638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.119646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.119657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 20:22:54.119663 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.119668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.119672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.119677 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.119682 | orchestrator | 2025-07-12 20:22:54.119686 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-07-12 20:22:54.119691 | orchestrator | Saturday 12 July 2025 20:20:32 +0000 (0:00:00.698) 0:04:17.446 ********* 2025-07-12 20:22:54.119696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-12 20:22:54.119701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-12 20:22:54.119706 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-12 20:22:54.119711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-12 20:22:54.119725 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.119730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-12 20:22:54.119786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-12 20:22:54.119805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-12 20:22:54.119814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-12 20:22:54.119819 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.119823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-07-12 20:22:54.119828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': 
'8774', 'tls_backend': 'no'}})  2025-07-12 20:22:54.119833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-12 20:22:54.119837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-07-12 20:22:54.119842 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.119847 | orchestrator | 2025-07-12 20:22:54.119851 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-07-12 20:22:54.119856 | orchestrator | Saturday 12 July 2025 20:20:33 +0000 (0:00:00.927) 0:04:18.374 ********* 2025-07-12 20:22:54.119860 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.119865 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.119869 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.119874 | orchestrator | 2025-07-12 20:22:54.119878 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-07-12 20:22:54.119883 | orchestrator | Saturday 12 July 2025 20:20:35 +0000 (0:00:01.684) 0:04:20.058 ********* 2025-07-12 20:22:54.119887 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.119892 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.119896 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.119901 | orchestrator | 2025-07-12 20:22:54.119905 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-07-12 20:22:54.119910 | orchestrator | Saturday 12 July 2025 20:20:37 +0000 (0:00:01.928) 0:04:21.987 ********* 2025-07-12 20:22:54.119915 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, 
testbed-node-2 2025-07-12 20:22:54.119919 | orchestrator | 2025-07-12 20:22:54.119940 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-07-12 20:22:54.119946 | orchestrator | Saturday 12 July 2025 20:20:38 +0000 (0:00:01.352) 0:04:23.339 ********* 2025-07-12 20:22:54.119950 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-07-12 20:22:54.119955 | orchestrator | 2025-07-12 20:22:54.119964 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-07-12 20:22:54.119969 | orchestrator | Saturday 12 July 2025 20:20:40 +0000 (0:00:01.562) 0:04:24.902 ********* 2025-07-12 20:22:54.119974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-07-12 20:22:54.119980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-07-12 20:22:54.119988 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 
'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-07-12 20:22:54.119992 | orchestrator | 2025-07-12 20:22:54.120012 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-07-12 20:22:54.120255 | orchestrator | Saturday 12 July 2025 20:20:44 +0000 (0:00:04.281) 0:04:29.183 ********* 2025-07-12 20:22:54.120265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 20:22:54.120271 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.120276 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 20:22:54.120281 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.120285 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 20:22:54.120290 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.120295 | orchestrator | 2025-07-12 20:22:54.120299 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-07-12 20:22:54.120304 | orchestrator | Saturday 12 July 2025 20:20:46 +0000 (0:00:01.580) 0:04:30.763 ********* 2025-07-12 20:22:54.120314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-12 20:22:54.120319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-12 20:22:54.120324 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.120329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-12 20:22:54.120334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': 
'6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-12 20:22:54.120338 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.120343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-12 20:22:54.120348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-07-12 20:22:54.120353 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.120357 | orchestrator | 2025-07-12 20:22:54.120362 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-07-12 20:22:54.120366 | orchestrator | Saturday 12 July 2025 20:20:48 +0000 (0:00:02.049) 0:04:32.813 ********* 2025-07-12 20:22:54.120631 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.120642 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.120651 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.120656 | orchestrator | 2025-07-12 20:22:54.120660 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-07-12 20:22:54.120665 | orchestrator | Saturday 12 July 2025 20:20:50 +0000 (0:00:02.442) 0:04:35.255 ********* 2025-07-12 20:22:54.120669 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.120673 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.120677 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.120681 | orchestrator | 2025-07-12 20:22:54.120685 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-07-12 20:22:54.120693 | orchestrator | Saturday 12 July 2025 20:20:53 
+0000 (0:00:03.116) 0:04:38.371 ********* 2025-07-12 20:22:54.120698 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-07-12 20:22:54.120703 | orchestrator | 2025-07-12 20:22:54.120707 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-07-12 20:22:54.120711 | orchestrator | Saturday 12 July 2025 20:20:54 +0000 (0:00:00.896) 0:04:39.268 ********* 2025-07-12 20:22:54.120716 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 20:22:54.120725 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.120730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 20:22:54.120855 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.120863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': 
{'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 20:22:54.120867 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.120871 | orchestrator | 2025-07-12 20:22:54.120876 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-07-12 20:22:54.120880 | orchestrator | Saturday 12 July 2025 20:20:56 +0000 (0:00:01.738) 0:04:41.007 ********* 2025-07-12 20:22:54.120885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 20:22:54.120889 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.120894 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 
20:22:54.120898 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.120907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-07-12 20:22:54.120911 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.120916 | orchestrator | 2025-07-12 20:22:54.120920 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-07-12 20:22:54.120928 | orchestrator | Saturday 12 July 2025 20:20:57 +0000 (0:00:01.113) 0:04:42.120 ********* 2025-07-12 20:22:54.120932 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.120937 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.120941 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.120946 | orchestrator | 2025-07-12 20:22:54.120950 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-07-12 20:22:54.120959 | orchestrator | Saturday 12 July 2025 20:20:59 +0000 (0:00:01.491) 0:04:43.612 ********* 2025-07-12 20:22:54.120963 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:22:54.120968 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:22:54.120972 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:22:54.120977 | orchestrator | 2025-07-12 20:22:54.120981 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-07-12 20:22:54.120985 | orchestrator | Saturday 12 July 2025 20:21:01 +0000 (0:00:02.799) 0:04:46.411 ********* 2025-07-12 20:22:54.120990 | 
orchestrator | ok: [testbed-node-0] 2025-07-12 20:22:54.120994 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:22:54.120999 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:22:54.121003 | orchestrator | 2025-07-12 20:22:54.121008 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-07-12 20:22:54.121012 | orchestrator | Saturday 12 July 2025 20:21:04 +0000 (0:00:02.306) 0:04:48.718 ********* 2025-07-12 20:22:54.121017 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-07-12 20:22:54.121021 | orchestrator | 2025-07-12 20:22:54.121026 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-07-12 20:22:54.121030 | orchestrator | Saturday 12 July 2025 20:21:05 +0000 (0:00:01.208) 0:04:49.926 ********* 2025-07-12 20:22:54.121035 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-12 20:22:54.121039 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.121043 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-12 20:22:54.121048 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.121052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-12 20:22:54.121057 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.121061 | orchestrator | 2025-07-12 20:22:54.121065 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-07-12 20:22:54.121070 | orchestrator | Saturday 12 July 2025 20:21:06 +0000 (0:00:01.294) 0:04:51.221 ********* 2025-07-12 20:22:54.121074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-12 20:22:54.121082 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.121102 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': 
{'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-12 20:22:54.121107 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.121111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-07-12 20:22:54.121116 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.121120 | orchestrator | 2025-07-12 20:22:54.121124 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-07-12 20:22:54.121128 | orchestrator | Saturday 12 July 2025 20:21:07 +0000 (0:00:01.209) 0:04:52.430 ********* 2025-07-12 20:22:54.121132 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.121244 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.121251 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.121255 | orchestrator | 2025-07-12 20:22:54.121259 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-07-12 20:22:54.121269 | orchestrator | Saturday 12 July 2025 20:21:09 +0000 (0:00:01.657) 0:04:54.088 ********* 2025-07-12 20:22:54.121274 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:22:54.121278 | orchestrator | ok: 
[testbed-node-1] 2025-07-12 20:22:54.121282 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:22:54.121286 | orchestrator | 2025-07-12 20:22:54.121290 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-07-12 20:22:54.121294 | orchestrator | Saturday 12 July 2025 20:21:11 +0000 (0:00:02.210) 0:04:56.298 ********* 2025-07-12 20:22:54.121299 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:22:54.121324 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:22:54.121329 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:22:54.121333 | orchestrator | 2025-07-12 20:22:54.121337 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-07-12 20:22:54.121438 | orchestrator | Saturday 12 July 2025 20:21:14 +0000 (0:00:03.103) 0:04:59.402 ********* 2025-07-12 20:22:54.121504 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:22:54.121510 | orchestrator | 2025-07-12 20:22:54.121514 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-07-12 20:22:54.121518 | orchestrator | Saturday 12 July 2025 20:21:16 +0000 (0:00:01.620) 0:05:01.023 ********* 2025-07-12 20:22:54.121523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:22:54.121540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:22:54.121561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:22:54.121567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:22:54.121571 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:22:54.121576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:22:54.121580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:22:54.121589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.121599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:22:54.121615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.121620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:22:54.121625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:22:54.121629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:22:54.121639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:22:54.121646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.121650 | orchestrator | 2025-07-12 20:22:54.121654 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using 
single external frontend] *** 2025-07-12 20:22:54.121659 | orchestrator | Saturday 12 July 2025 20:21:19 +0000 (0:00:03.408) 0:05:04.431 ********* 2025-07-12 20:22:54.121674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 20:22:54.121679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:22:54.121683 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:22:54.121687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:22:54.121798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.121805 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.121824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 20:22:54.121829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 20:22:54.121834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:22:54.121838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:22:54.121846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:22:54.121850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:22:54.121882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:22:54.121891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:22:54.121895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.121900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:22:54.121904 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.121912 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.121917 | orchestrator | 2025-07-12 20:22:54.121921 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-07-12 20:22:54.121925 | orchestrator | Saturday 12 July 2025 20:21:20 +0000 (0:00:00.743) 0:05:05.174 ********* 2025-07-12 20:22:54.121929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 20:22:54.121934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 20:22:54.121939 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.121943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  
2025-07-12 20:22:54.121947 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 20:22:54.122006 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.122012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 20:22:54.122309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-07-12 20:22:54.122318 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.122323 | orchestrator | 2025-07-12 20:22:54.122328 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-07-12 20:22:54.122332 | orchestrator | Saturday 12 July 2025 20:21:21 +0000 (0:00:01.245) 0:05:06.420 ********* 2025-07-12 20:22:54.122337 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.122341 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.122346 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.122350 | orchestrator | 2025-07-12 20:22:54.122360 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-07-12 20:22:54.122365 | orchestrator | Saturday 12 July 2025 20:21:23 +0000 (0:00:01.532) 0:05:07.952 ********* 2025-07-12 20:22:54.122370 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:22:54.122374 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:22:54.122378 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:22:54.122383 | orchestrator | 2025-07-12 20:22:54.122388 | 
orchestrator | TASK [include_role : opensearch] *********************************************** 2025-07-12 20:22:54.122392 | orchestrator | Saturday 12 July 2025 20:21:25 +0000 (0:00:02.151) 0:05:10.104 ********* 2025-07-12 20:22:54.122409 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:22:54.122415 | orchestrator | 2025-07-12 20:22:54.122419 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-07-12 20:22:54.122424 | orchestrator | Saturday 12 July 2025 20:21:27 +0000 (0:00:01.675) 0:05:11.779 ********* 2025-07-12 20:22:54.122430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:22:54.122444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:22:54.122449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:22:54.122478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:22:54.122495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:22:54.122524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:22:54.122528 | orchestrator | 2025-07-12 20:22:54.122533 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-07-12 20:22:54.122537 | orchestrator | Saturday 12 July 2025 20:21:32 +0000 (0:00:05.237) 0:05:17.017 ********* 2025-07-12 20:22:54.122541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 20:22:54.122549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 
'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 20:22:54.122565 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 20:22:54.122574 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.122579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 20:22:54.122584 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.122588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 20:22:54.122595 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 20:22:54.122600 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.122604 | orchestrator | 2025-07-12 20:22:54.122608 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-07-12 20:22:54.122612 | orchestrator | Saturday 12 July 2025 20:21:33 +0000 (0:00:00.768) 0:05:17.786 ********* 2025-07-12 20:22:54.122627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-12 20:22:54.122638 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 20:22:54.122643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 20:22:54.122647 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.122652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-12 20:22:54.122656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 20:22:54.122660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 20:22:54.122664 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.122668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-07-12 20:22:54.122672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 20:22:54.122680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-07-12 20:22:54.122684 | 
orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.122688 | orchestrator | 2025-07-12 20:22:54.122692 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-07-12 20:22:54.122696 | orchestrator | Saturday 12 July 2025 20:21:34 +0000 (0:00:01.267) 0:05:19.053 ********* 2025-07-12 20:22:54.122700 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.122704 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.122708 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.122712 | orchestrator | 2025-07-12 20:22:54.122716 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-07-12 20:22:54.122720 | orchestrator | Saturday 12 July 2025 20:21:34 +0000 (0:00:00.500) 0:05:19.554 ********* 2025-07-12 20:22:54.122724 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.122728 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.122732 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.122737 | orchestrator | 2025-07-12 20:22:54.122741 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-07-12 20:22:54.122745 | orchestrator | Saturday 12 July 2025 20:21:36 +0000 (0:00:01.399) 0:05:20.953 ********* 2025-07-12 20:22:54.122749 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:22:54.122753 | orchestrator | 2025-07-12 20:22:54.122757 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-07-12 20:22:54.122761 | orchestrator | Saturday 12 July 2025 20:21:37 +0000 (0:00:01.419) 0:05:22.373 ********* 2025-07-12 20:22:54.122768 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-12 20:22:54.122787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 20:22:54.122793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.122797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.122802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 20:22:54.122806 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-12 20:22:54.122810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 20:22:54.122821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.122836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.122841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}})  2025-07-12 20:22:54.122845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-12 20:22:54.122850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 20:22:54.122854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.122858 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.122868 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 20:22:54.122879 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 20:22:54.122884 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 20:22:54.122889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 
'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-12 20:22:54.122894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-12 20:22:54.122904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.122911 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.122916 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.122920 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.122924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 20:22:54.122929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 20:22:54.122933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 20:22:54.122946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-12 20:22:54.122951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.122955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.122959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}})
2025-07-12 20:22:54.122964 | orchestrator |
2025-07-12 20:22:54.122968 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] ***
2025-07-12 20:22:54.122972 | orchestrator | Saturday 12 July 2025 20:21:42 +0000 (0:00:04.659) 0:05:27.032 *********
2025-07-12 20:22:54.122976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 20:22:54.122985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:22:54.122989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.122999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.123004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 20:22:54.123009 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 
'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-12 20:22:54.123014 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-12 20:22:54.123021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 
20:22:54.123026 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.123032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 20:22:54.123040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-12 20:22:54.123045 | orchestrator | skipping: [testbed-node-0] 2025-07-12 
20:22:54.123049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 20:22:54.123053 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.123058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.123067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 20:22:54.123074 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-12 20:22:54.123081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-12 20:22:54.123086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-12 20:22:54.123090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 20:22:54.123099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.123103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.123107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.123116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.123121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 20:22:54.123125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 20:22:54.123129 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.123133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-12 20:22:54.123142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-07-12 20:22:54.123147 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.123154 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:22:54.123161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-07-12 20:22:54.123165 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.123169 | orchestrator | 2025-07-12 20:22:54.123173 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-07-12 20:22:54.123177 | orchestrator | Saturday 12 July 2025 20:21:43 +0000 (0:00:00.881) 0:05:27.913 ********* 2025-07-12 20:22:54.123182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-12 20:22:54.123186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-12 20:22:54.123191 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 20:22:54.123200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 20:22:54.123204 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.123208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-12 20:22:54.123212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-07-12 20:22:54.123216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 20:22:54.123221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 20:22:54.123225 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.123229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-07-12 20:22:54.123233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True}})  2025-07-12 20:22:54.123237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 20:22:54.123244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-07-12 20:22:54.123249 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.123253 | orchestrator | 2025-07-12 20:22:54.123257 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-07-12 20:22:54.123261 | orchestrator | Saturday 12 July 2025 20:21:44 +0000 (0:00:01.045) 0:05:28.959 ********* 2025-07-12 20:22:54.123267 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.123271 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.123275 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.123279 | orchestrator | 2025-07-12 20:22:54.123283 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-07-12 20:22:54.123287 | orchestrator | Saturday 12 July 2025 20:21:45 +0000 (0:00:00.880) 0:05:29.840 ********* 2025-07-12 20:22:54.123291 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.123295 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.123299 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.123303 | orchestrator | 2025-07-12 20:22:54.123307 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-07-12 20:22:54.123311 | orchestrator | Saturday 12 July 
2025 20:21:46 +0000 (0:00:01.396) 0:05:31.236 ********* 2025-07-12 20:22:54.123330 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:22:54.123335 | orchestrator | 2025-07-12 20:22:54.123339 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-07-12 20:22:54.123343 | orchestrator | Saturday 12 July 2025 20:21:48 +0000 (0:00:01.514) 0:05:32.750 ********* 2025-07-12 20:22:54.123347 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 20:22:54.123352 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 
'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 20:22:54.123359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-07-12 20:22:54.123364 | orchestrator | 2025-07-12 20:22:54.123368 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-07-12 20:22:54.123372 | orchestrator | Saturday 12 July 2025 20:21:51 +0000 (0:00:02.993) 0:05:35.744 ********* 2025-07-12 20:22:54.123379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-12 20:22:54.123387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-12 20:22:54.123391 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.123395 | 
orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.123400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-07-12 20:22:54.123404 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.123408 | orchestrator | 2025-07-12 20:22:54.123412 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-07-12 20:22:54.123416 | orchestrator | Saturday 12 July 2025 20:21:51 +0000 (0:00:00.434) 0:05:36.179 ********* 2025-07-12 20:22:54.123421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-12 20:22:54.123425 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.123429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-12 20:22:54.123433 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.123437 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-07-12 20:22:54.123444 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.123448 | orchestrator | 2025-07-12 20:22:54.123452 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-07-12 20:22:54.123478 | orchestrator | Saturday 12 July 2025 20:21:52 +0000 (0:00:00.650) 0:05:36.830 ********* 2025-07-12 20:22:54.123482 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.123486 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.123490 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.123494 | orchestrator | 2025-07-12 20:22:54.123498 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-07-12 20:22:54.123505 | orchestrator | Saturday 12 July 2025 20:21:53 +0000 (0:00:01.173) 0:05:38.003 ********* 2025-07-12 20:22:54.123509 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:22:54.123513 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.123517 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:22:54.123521 | orchestrator | 2025-07-12 20:22:54.123525 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-07-12 20:22:54.123529 | orchestrator | Saturday 12 July 2025 20:21:54 +0000 (0:00:01.446) 0:05:39.449 ********* 2025-07-12 20:22:54.123533 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:22:54.123537 | orchestrator | 2025-07-12 20:22:54.123541 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-07-12 20:22:54.123546 | orchestrator | Saturday 12 July 2025 20:21:56 +0000 (0:00:01.542) 0:05:40.991 ********* 2025-07-12 20:22:54.123588 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-12 20:22:54.123594 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-12 20:22:54.123599 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 
'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-07-12 20:22:54.123612 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-12 20:22:54.123617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 
'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-12 20:22:54.123621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-07-12 20:22:54.123626 | orchestrator | 2025-07-12 20:22:54.123630 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-07-12 20:22:54.123634 | orchestrator | 
Saturday 12 July 2025 20:22:02 +0000 (0:00:06.509) 0:05:47.501 ********* 2025-07-12 20:22:54.123638 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-07-12 20:22:54.123653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 
'tls_backend': 'no'}}}})  2025-07-12 20:22:54.123657 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:22:54.123662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-07-12 20:22:54.123666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 
'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-07-12 20:22:54.123670 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.123675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-07-12 20:22:54.123686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:5.0.1.20250711', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-07-12 20:22:54.123690 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.123694 | orchestrator |
2025-07-12 20:22:54.123698 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2025-07-12 20:22:54.123705 | orchestrator | Saturday 12 July 2025 20:22:04 +0000 (0:00:01.219) 0:05:48.720 *********
2025-07-12 20:22:54.123709 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-12 20:22:54.123713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-12 20:22:54.123718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-12 20:22:54.123722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-12 20:22:54.123726 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.123730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-12 20:22:54.123734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-12 20:22:54.123739 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-12 20:22:54.123743 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-12 20:22:54.123747 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.123751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-12 20:22:54.123755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-07-12 20:22:54.123763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-12 20:22:54.123767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-07-12 20:22:54.123772 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.123776 | orchestrator |
2025-07-12 20:22:54.123780 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-07-12 20:22:54.123784 | orchestrator |
Saturday 12 July 2025 20:22:05 +0000 (0:00:00.948) 0:05:49.668 *********
2025-07-12 20:22:54.123788 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:22:54.123792 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:22:54.123796 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:22:54.123800 | orchestrator |
2025-07-12 20:22:54.123804 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-07-12 20:22:54.123935 | orchestrator | Saturday 12 July 2025 20:22:06 +0000 (0:00:01.314) 0:05:50.983 *********
2025-07-12 20:22:54.123941 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:22:54.123945 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:22:54.123949 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:22:54.123953 | orchestrator |
2025-07-12 20:22:54.123957 | orchestrator | TASK [include_role : swift] ****************************************************
2025-07-12 20:22:54.123962 | orchestrator | Saturday 12 July 2025 20:22:08 +0000 (0:00:02.226) 0:05:53.210 *********
2025-07-12 20:22:54.123965 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.123970 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.123974 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.123978 | orchestrator |
2025-07-12 20:22:54.123982 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-07-12 20:22:54.124043 | orchestrator | Saturday 12 July 2025 20:22:09 +0000 (0:00:00.675) 0:05:53.885 *********
2025-07-12 20:22:54.124048 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.124052 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.124056 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.124060 | orchestrator |
2025-07-12 20:22:54.124064 | orchestrator | TASK [include_role : trove] ****************************************************
2025-07-12 20:22:54.124072 | orchestrator | Saturday 12 July 2025 20:22:09 +0000 (0:00:00.346) 0:05:54.232 *********
2025-07-12 20:22:54.124076 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.124080 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.124085 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.124089 | orchestrator |
2025-07-12 20:22:54.124093 | orchestrator | TASK [include_role : venus] ****************************************************
2025-07-12 20:22:54.124097 | orchestrator | Saturday 12 July 2025 20:22:09 +0000 (0:00:00.323) 0:05:54.555 *********
2025-07-12 20:22:54.124101 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.124105 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.124109 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.124113 | orchestrator |
2025-07-12 20:22:54.124117 | orchestrator | TASK [include_role : watcher] **************************************************
2025-07-12 20:22:54.124121 | orchestrator | Saturday 12 July 2025 20:22:10 +0000 (0:00:00.348) 0:05:54.904 *********
2025-07-12 20:22:54.124125 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.124129 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.124133 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.124137 | orchestrator |
2025-07-12 20:22:54.124142 | orchestrator | TASK [include_role : zun] ******************************************************
2025-07-12 20:22:54.124146 | orchestrator | Saturday 12 July 2025 20:22:11 +0000 (0:00:00.696) 0:05:55.600 *********
2025-07-12 20:22:54.124150 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.124154 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.124164 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.124168 | orchestrator |
2025-07-12 20:22:54.124172 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-07-12 20:22:54.124208 | orchestrator | Saturday 12 July 2025 20:22:11 +0000 (0:00:00.609) 0:05:56.210 *********
2025-07-12 20:22:54.124213 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:22:54.124217 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:22:54.124221 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:22:54.124225 | orchestrator |
2025-07-12 20:22:54.124229 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-07-12 20:22:54.124233 | orchestrator | Saturday 12 July 2025 20:22:12 +0000 (0:00:00.687) 0:05:56.897 *********
2025-07-12 20:22:54.124237 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:22:54.124242 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:22:54.124246 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:22:54.124250 | orchestrator |
2025-07-12 20:22:54.124254 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-07-12 20:22:54.124258 | orchestrator | Saturday 12 July 2025 20:22:13 +0000 (0:00:00.693) 0:05:57.591 *********
2025-07-12 20:22:54.124262 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:22:54.124266 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:22:54.124270 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:22:54.124274 | orchestrator |
2025-07-12 20:22:54.124278 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-07-12 20:22:54.124282 | orchestrator | Saturday 12 July 2025 20:22:13 +0000 (0:00:00.923) 0:05:58.514 *********
2025-07-12 20:22:54.124286 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:22:54.124290 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:22:54.124294 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:22:54.124298 | orchestrator |
2025-07-12 20:22:54.124302 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-07-12 20:22:54.124306 | orchestrator | Saturday 12 July 2025 20:22:14 +0000 (0:00:00.874) 0:05:59.427 *********
2025-07-12 20:22:54.124310 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:22:54.124314 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:22:54.124318 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:22:54.124322 | orchestrator |
2025-07-12 20:22:54.124326 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-07-12 20:22:54.124330 | orchestrator | Saturday 12 July 2025 20:22:15 +0000 (0:00:08.503) 0:06:00.302 *********
2025-07-12 20:22:54.124334 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:22:54.124339 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:22:54.124343 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:22:54.124347 | orchestrator |
2025-07-12 20:22:54.124351 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-07-12 20:22:54.124355 | orchestrator | Saturday 12 July 2025 20:22:24 +0000 (0:00:08.503) 0:06:08.805 *********
2025-07-12 20:22:54.124359 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:22:54.124363 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:22:54.124367 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:22:54.124371 | orchestrator |
2025-07-12 20:22:54.124425 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-07-12 20:22:54.124437 | orchestrator | Saturday 12 July 2025 20:22:24 +0000 (0:00:00.732) 0:06:09.538 *********
2025-07-12 20:22:54.124441 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:22:54.124445 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:22:54.124455 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:22:54.124471 | orchestrator |
2025-07-12 20:22:54.124475 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-07-12 20:22:54.124479 | orchestrator | Saturday 12 July 2025 20:22:33 +0000 (0:00:08.589) 0:06:18.128 *********
2025-07-12 20:22:54.124483 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:22:54.124488 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:22:54.124492 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:22:54.124496 | orchestrator |
2025-07-12 20:22:54.124500 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-07-12 20:22:54.124508 | orchestrator | Saturday 12 July 2025 20:22:37 +0000 (0:00:03.774) 0:06:21.903 *********
2025-07-12 20:22:54.124512 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:22:54.124555 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:22:54.124560 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:22:54.124565 | orchestrator |
2025-07-12 20:22:54.124569 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-07-12 20:22:54.124576 | orchestrator | Saturday 12 July 2025 20:22:42 +0000 (0:00:05.080) 0:06:26.983 *********
2025-07-12 20:22:54.124580 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.124584 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.124588 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.124592 | orchestrator |
2025-07-12 20:22:54.124596 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-07-12 20:22:54.124600 | orchestrator | Saturday 12 July 2025 20:22:42 +0000 (0:00:00.364) 0:06:27.348 *********
2025-07-12 20:22:54.124604 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.124641 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.124646 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.124650 | orchestrator |
2025-07-12 20:22:54.124654 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-07-12 20:22:54.124658 | orchestrator | Saturday 12 July 2025 20:22:43 +0000 (0:00:00.440) 0:06:27.788 *********
2025-07-12 20:22:54.124662 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.124666 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.124670 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.124674 | orchestrator |
2025-07-12 20:22:54.124678 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-07-12 20:22:54.124682 | orchestrator | Saturday 12 July 2025 20:22:43 +0000 (0:00:00.410) 0:06:28.199 *********
2025-07-12 20:22:54.124686 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.124690 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.124694 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.124698 | orchestrator |
2025-07-12 20:22:54.124702 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-07-12 20:22:54.124706 | orchestrator | Saturday 12 July 2025 20:22:44 +0000 (0:00:00.724) 0:06:28.924 *********
2025-07-12 20:22:54.124710 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.124714 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.124718 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.124722 | orchestrator |
2025-07-12 20:22:54.124726 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-07-12 20:22:54.124730 | orchestrator | Saturday 12 July 2025 20:22:44 +0000 (0:00:00.365) 0:06:29.289 *********
2025-07-12 20:22:54.124734 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:22:54.124738 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:22:54.124742 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:22:54.124746 | orchestrator |
2025-07-12 20:22:54.124750 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-07-12 20:22:54.124754 | orchestrator | Saturday 12 July 2025 20:22:45 +0000 (0:00:00.387) 0:06:29.677 *********
2025-07-12 20:22:54.124758 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:22:54.124763 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:22:54.124767 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:22:54.124771 | orchestrator |
2025-07-12 20:22:54.124775 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-07-12 20:22:54.124779 | orchestrator | Saturday 12 July 2025 20:22:50 +0000 (0:00:04.893) 0:06:34.571 *********
2025-07-12 20:22:54.124783 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:22:54.124787 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:22:54.124791 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:22:54.124795 | orchestrator |
2025-07-12 20:22:54.124799 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:22:54.124809 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-07-12 20:22:54.124813 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-07-12 20:22:54.124818 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-07-12 20:22:54.124822 | orchestrator |
2025-07-12 20:22:54.124826 | orchestrator |
2025-07-12 20:22:54.124830 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:22:54.124849 | orchestrator | Saturday 12 July 2025 20:22:51 +0000 (0:00:01.320) 0:06:35.892 *********
2025-07-12 20:22:54.124854 | orchestrator | ===============================================================================
2025-07-12 20:22:54.124858 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 9.19s
2025-07-12 20:22:54.124862 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 8.59s
2025-07-12 20:22:54.124866 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.50s
2025-07-12 20:22:54.124870 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.51s
2025-07-12 20:22:54.124874 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.24s
2025-07-12 20:22:54.124878 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 5.19s
2025-07-12 20:22:54.124882 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 5.08s
2025-07-12 20:22:54.124886 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 4.89s
2025-07-12 20:22:54.124890 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.89s
2025-07-12 20:22:54.124894 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.82s
2025-07-12 20:22:54.124898 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.66s
2025-07-12 20:22:54.124902 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 4.62s
2025-07-12 20:22:54.124906 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 4.58s
2025-07-12 20:22:54.124910 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.54s
2025-07-12 20:22:54.124917 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 4.53s
2025-07-12 20:22:54.124922 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.28s
2025-07-12 20:22:54.124926 | orchestrator | loadbalancer : Copying over keepalived.conf ----------------------------- 4.11s
2025-07-12 20:22:54.124930 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 3.98s
2025-07-12 20:22:54.124934 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 3.93s
2025-07-12 20:22:54.124938 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.90s
2025-07-12 20:22:57.155097 | orchestrator | 2025-07-12 20:22:57 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED
2025-07-12 20:22:57.157435 | orchestrator | 2025-07-12 20:22:57 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED
2025-07-12 20:22:57.159975 | orchestrator | 2025-07-12 20:22:57 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:22:57.160075 | orchestrator | 2025-07-12 20:22:57 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:23:00.202077 | orchestrator | 2025-07-12 20:23:00 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED
2025-07-12 20:23:00.202858 | orchestrator | 2025-07-12 20:23:00 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED
2025-07-12 20:23:00.204226 | orchestrator | 2025-07-12 20:23:00 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:23:00.204301 | orchestrator | 2025-07-12 20:23:00 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:23:03.245359 | orchestrator | 2025-07-12 20:23:03 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED
2025-07-12 20:23:03.251162 | orchestrator | 2025-07-12 20:23:03 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED
2025-07-12 20:23:03.254362 | orchestrator | 2025-07-12 20:23:03 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:23:03.254419 | orchestrator | 2025-07-12 20:23:03 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:23:06.294541 | orchestrator | 2025-07-12 20:23:06 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED
2025-07-12 20:23:06.296816 | orchestrator | 2025-07-12 20:23:06 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED
2025-07-12
20:23:06.298118 | orchestrator | 2025-07-12 20:23:06 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED
2025-07-12 20:23:06.298326 | orchestrator | 2025-07-12 20:23:06 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:24:53 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED 2025-07-12 20:24:53.123573 | orchestrator | 2025-07-12 20:24:53 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED 2025-07-12 20:24:53.126165 | orchestrator | 2025-07-12 20:24:53 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:24:53.126202 | orchestrator | 2025-07-12 20:24:53 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:24:56.184656 | orchestrator | 2025-07-12 20:24:56 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED 2025-07-12 20:24:56.186552 | orchestrator | 2025-07-12 20:24:56 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED 2025-07-12 20:24:56.188271 | orchestrator | 2025-07-12 20:24:56 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:24:56.188341 | orchestrator | 2025-07-12 20:24:56 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:24:59.240640 | orchestrator | 2025-07-12 20:24:59 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED 2025-07-12 20:24:59.242068 | orchestrator | 2025-07-12 20:24:59 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED 2025-07-12 20:24:59.244069 | orchestrator | 2025-07-12 20:24:59 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:24:59.244095 | orchestrator | 2025-07-12 20:24:59 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:25:02.303802 | orchestrator | 2025-07-12 20:25:02 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED 2025-07-12 20:25:02.305637 | orchestrator | 2025-07-12 20:25:02 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED 2025-07-12 20:25:02.308362 | orchestrator | 2025-07-12 20:25:02 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:25:02.308409 | orchestrator | 2025-07-12 20:25:02 | INFO  | 
Wait 1 second(s) until the next check 2025-07-12 20:25:05.354261 | orchestrator | 2025-07-12 20:25:05 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED 2025-07-12 20:25:05.354885 | orchestrator | 2025-07-12 20:25:05 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED 2025-07-12 20:25:05.356461 | orchestrator | 2025-07-12 20:25:05 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state STARTED 2025-07-12 20:25:05.356565 | orchestrator | 2025-07-12 20:25:05 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:25:08.409869 | orchestrator | 2025-07-12 20:25:08 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED 2025-07-12 20:25:08.413280 | orchestrator | 2025-07-12 20:25:08 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED 2025-07-12 20:25:08.416673 | orchestrator | 2025-07-12 20:25:08 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED 2025-07-12 20:25:08.425556 | orchestrator | 2025-07-12 20:25:08 | INFO  | Task 36a98984-c46c-44a2-868e-109b0a1c9d89 is in state SUCCESS 2025-07-12 20:25:08.427540 | orchestrator | 2025-07-12 20:25:08.427588 | orchestrator | 2025-07-12 20:25:08.427598 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2025-07-12 20:25:08.427606 | orchestrator | 2025-07-12 20:25:08.427614 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-07-12 20:25:08.427621 | orchestrator | Saturday 12 July 2025 20:13:17 +0000 (0:00:00.965) 0:00:00.965 ********* 2025-07-12 20:25:08.427657 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:25:08.427669 | orchestrator | 2025-07-12 20:25:08.427674 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-07-12 20:25:08.427678 | 
orchestrator | Saturday 12 July 2025 20:13:18 +0000 (0:00:01.433) 0:00:02.399 *********
2025-07-12 20:25:08.427682 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.427687 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.427691 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.427695 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.427699 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.427703 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.427707 | orchestrator |
2025-07-12 20:25:08.427711 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-07-12 20:25:08.427714 | orchestrator | Saturday 12 July 2025 20:13:20 +0000 (0:00:01.927) 0:00:04.327 *********
2025-07-12 20:25:08.427718 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.427722 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.427726 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.427730 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.427734 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.427737 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.427741 | orchestrator |
2025-07-12 20:25:08.427812 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-07-12 20:25:08.427857 | orchestrator | Saturday 12 July 2025 20:13:21 +0000 (0:00:00.907) 0:00:05.234 *********
2025-07-12 20:25:08.427863 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.427874 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.427879 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.427882 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.427886 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.427890 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.427894 | orchestrator |
2025-07-12 20:25:08.427898 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-07-12 20:25:08.427901 | orchestrator | Saturday 12 July 2025 20:13:23 +0000 (0:00:01.383) 0:00:06.618 *********
2025-07-12 20:25:08.427905 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.427909 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.427913 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.427917 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.427920 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.427924 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.427928 | orchestrator |
2025-07-12 20:25:08.427932 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-07-12 20:25:08.427936 | orchestrator | Saturday 12 July 2025 20:13:24 +0000 (0:00:01.213) 0:00:07.831 *********
2025-07-12 20:25:08.427940 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.427943 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.427978 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.427982 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.427986 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.427990 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.427994 | orchestrator |
2025-07-12 20:25:08.428174 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-07-12 20:25:08.428183 | orchestrator | Saturday 12 July 2025 20:13:25 +0000 (0:00:01.020) 0:00:08.852 *********
2025-07-12 20:25:08.428187 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.428191 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.428195 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.428199 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.428203 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.428207 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.428211 | orchestrator |
2025-07-12 20:25:08.428215 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-07-12 20:25:08.428228 | orchestrator | Saturday 12 July 2025 20:13:26 +0000 (0:00:01.352) 0:00:10.205 *********
2025-07-12 20:25:08.428232 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.428248 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.428252 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.428256 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.428260 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.428264 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.428267 | orchestrator |
2025-07-12 20:25:08.428271 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-07-12 20:25:08.428275 | orchestrator | Saturday 12 July 2025 20:13:27 +0000 (0:00:01.126) 0:00:11.331 *********
2025-07-12 20:25:08.428279 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.428283 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.428287 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.428291 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.428294 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.428298 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.428302 | orchestrator |
2025-07-12 20:25:08.428307 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-07-12 20:25:08.428310 | orchestrator | Saturday 12 July 2025 20:13:29 +0000 (0:00:01.033) 0:00:12.646 *********
2025-07-12 20:25:08.428314 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 20:25:08.428319 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 20:25:08.428323 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 20:25:08.428327 | orchestrator |
2025-07-12 20:25:08.428330 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-07-12 20:25:08.428334 | orchestrator | Saturday 12 July 2025 20:13:30 +0000 (0:00:01.033) 0:00:13.680 *********
2025-07-12 20:25:08.428338 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.428378 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.428382 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.428386 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.428390 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.428394 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.428398 | orchestrator |
2025-07-12 20:25:08.428426 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-07-12 20:25:08.428430 | orchestrator | Saturday 12 July 2025 20:13:32 +0000 (0:00:03.083) 0:00:15.567 *********
2025-07-12 20:25:08.428434 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 20:25:08.428438 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 20:25:08.428442 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 20:25:08.428446 | orchestrator |
2025-07-12 20:25:08.428450 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-07-12 20:25:08.428453 | orchestrator | Saturday 12 July 2025 20:13:35 +0000 (0:00:01.063) 0:00:18.650 *********
2025-07-12 20:25:08.428457 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 20:25:08.428462 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 20:25:08.428465 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 20:25:08.428469 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.428473 | orchestrator |
2025-07-12 20:25:08.428477 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use]
********************* 2025-07-12 20:25:08.428483 | orchestrator | Saturday 12 July 2025 20:13:36 +0000 (0:00:01.063) 0:00:19.714 ********* 2025-07-12 20:25:08.428491 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.428499 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.428508 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.428512 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.428515 | orchestrator | 2025-07-12 20:25:08.428519 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-07-12 20:25:08.428523 | orchestrator | Saturday 12 July 2025 20:13:38 +0000 (0:00:01.909) 0:00:21.624 ********* 2025-07-12 20:25:08.428529 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.428535 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | 
bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.428584 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.428621 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.428625 | orchestrator | 2025-07-12 20:25:08.428629 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-07-12 20:25:08.428633 | orchestrator | Saturday 12 July 2025 20:13:38 +0000 (0:00:00.742) 0:00:22.366 ********* 2025-07-12 20:25:08.428640 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-07-12 20:13:32.831969', 'end': '2025-07-12 20:13:33.079394', 'delta': '0:00:00.247425', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.428844 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', 
'--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-07-12 20:13:33.869196', 'end': '2025-07-12 20:13:34.099641', 'delta': '0:00:00.230445', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.428863 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-07-12 20:13:34.673645', 'end': '2025-07-12 20:13:34.907973', 'delta': '0:00:00.234328', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.428877 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.428881 | orchestrator | 2025-07-12 20:25:08.428885 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-07-12 20:25:08.428889 | orchestrator | Saturday 12 July 2025 20:13:39 +0000 (0:00:00.449) 0:00:22.815 ********* 2025-07-12 20:25:08.428893 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.428897 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.428901 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.428905 | orchestrator | ok: [testbed-node-3] 
2025-07-12 20:25:08.428909 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.428912 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.428916 | orchestrator |
2025-07-12 20:25:08.428920 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-07-12 20:25:08.428924 | orchestrator | Saturday 12 July 2025 20:13:42 +0000 (0:00:02.732) 0:00:25.548 *********
2025-07-12 20:25:08.428928 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.428932 | orchestrator |
2025-07-12 20:25:08.428936 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-07-12 20:25:08.428939 | orchestrator | Saturday 12 July 2025 20:13:43 +0000 (0:00:00.936) 0:00:26.485 *********
2025-07-12 20:25:08.428943 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.428947 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.428951 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.428955 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.428959 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.428962 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.428966 | orchestrator |
2025-07-12 20:25:08.428970 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-07-12 20:25:08.428974 | orchestrator | Saturday 12 July 2025 20:13:44 +0000 (0:00:01.603) 0:00:28.088 *********
2025-07-12 20:25:08.428978 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.428982 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.428985 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.428989 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.428993 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.428997 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.429001 | orchestrator |
2025-07-12 20:25:08.429004 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-12 20:25:08.429008 | orchestrator | Saturday 12 July 2025 20:13:46 +0000 (0:00:01.338) 0:00:29.427 *********
2025-07-12 20:25:08.429012 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.429016 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.429020 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.429024 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.429028 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.429031 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.429035 | orchestrator |
2025-07-12 20:25:08.429039 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-07-12 20:25:08.429043 | orchestrator | Saturday 12 July 2025 20:13:47 +0000 (0:00:01.030) 0:00:30.457 *********
2025-07-12 20:25:08.429047 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.429051 | orchestrator |
2025-07-12 20:25:08.429055 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-07-12 20:25:08.429062 | orchestrator | Saturday 12 July 2025 20:13:47 +0000 (0:00:00.164) 0:00:30.622 *********
2025-07-12 20:25:08.429066 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.429070 | orchestrator |
2025-07-12 20:25:08.429073 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-12 20:25:08.429077 | orchestrator | Saturday 12 July 2025 20:13:47 +0000 (0:00:00.240) 0:00:30.862 *********
2025-07-12 20:25:08.429081 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.429085 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.429089 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.429092 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.429096 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.429100 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.429104 | orchestrator |
2025-07-12 20:25:08.429108 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-07-12 20:25:08.429165 | orchestrator | Saturday 12 July 2025 20:13:48 +0000 (0:00:00.864) 0:00:31.726 *********
2025-07-12 20:25:08.429172 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.429176 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.429180 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.429184 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.429188 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.429192 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.429195 | orchestrator |
2025-07-12 20:25:08.429199 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-07-12 20:25:08.429203 | orchestrator | Saturday 12 July 2025 20:13:49 +0000 (0:00:01.517) 0:00:33.244 *********
2025-07-12 20:25:08.429207 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.429218 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.429222 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.429226 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.429273 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.429277 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.429280 | orchestrator |
2025-07-12 20:25:08.429284 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-07-12 20:25:08.429288 | orchestrator | Saturday 12 July 2025 20:13:51 +0000 (0:00:01.165) 0:00:34.409 *********
2025-07-12 20:25:08.429292 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.429438 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.429451 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.429455 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.429459 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.429463 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.429472 | orchestrator |
2025-07-12 20:25:08.429477 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-07-12 20:25:08.429481 | orchestrator | Saturday 12 July 2025 20:13:52 +0000 (0:00:01.432) 0:00:35.842 *********
2025-07-12 20:25:08.429488 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.429493 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.429497 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.429501 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.429505 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.429627 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.429635 | orchestrator |
2025-07-12 20:25:08.429639 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-07-12 20:25:08.429643 | orchestrator | Saturday 12 July 2025 20:13:53 +0000 (0:00:01.345) 0:00:37.187 *********
2025-07-12 20:25:08.429647 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.429652 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.429659 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.429664 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.429667 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.429671 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.429682 | orchestrator |
2025-07-12 20:25:08.429686 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-07-12 20:25:08.429690 | orchestrator | Saturday 12 July 2025 20:13:54 +0000 (0:00:01.102) 0:00:38.289 *********
2025-07-12 20:25:08.429694 | orchestrator | skipping: [testbed-node-0]
2025-07-12
20:25:08.429698 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.429702 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.429705 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.429709 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.429713 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.429717 | orchestrator | 2025-07-12 20:25:08.429720 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-07-12 20:25:08.429725 | orchestrator | Saturday 12 July 2025 20:13:55 +0000 (0:00:01.017) 0:00:39.307 ********* 2025-07-12 20:25:08.429733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429738 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-07-12 20:25:08.429751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429788 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62f31422-022f-413d-8784-b59e1dab1027', 'scsi-SQEMU_QEMU_HARDDISK_62f31422-022f-413d-8784-b59e1dab1027'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62f31422-022f-413d-8784-b59e1dab1027-part1', 'scsi-SQEMU_QEMU_HARDDISK_62f31422-022f-413d-8784-b59e1dab1027-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['eb4c1330-a351-47ae-b39f-6c88d500daef']}, 'sectors': 167770079, 'sectorsize': 512, 'size': '80.00 GB', 'start': '2048', 'uuid': 'eb4c1330-a351-47ae-b39f-6c88d500daef'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:25:08.429815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-42-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:25:08.429820 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429841 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429846 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429854 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429862 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3da5b399-01e7-4def-b33a-29c13319e0e2', 'scsi-SQEMU_QEMU_HARDDISK_3da5b399-01e7-4def-b33a-29c13319e0e2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3da5b399-01e7-4def-b33a-29c13319e0e2-part1', 'scsi-SQEMU_QEMU_HARDDISK_3da5b399-01e7-4def-b33a-29c13319e0e2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['eb4c1330-a351-47ae-b39f-6c88d500daef']}, 'sectors': 167770079, 'sectorsize': 512, 'size': '80.00 GB', 'start': '2048', 'uuid': 'eb4c1330-a351-47ae-b39f-6c88d500daef'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:25:08.429895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-42-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:25:08.429900 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.429904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 
'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429927 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.429931 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2900e8ba-3a3c-419f-a89d-80346bc85f37', 'scsi-SQEMU_QEMU_HARDDISK_2900e8ba-3a3c-419f-a89d-80346bc85f37'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2900e8ba-3a3c-419f-a89d-80346bc85f37-part1', 'scsi-SQEMU_QEMU_HARDDISK_2900e8ba-3a3c-419f-a89d-80346bc85f37-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['eb4c1330-a351-47ae-b39f-6c88d500daef']}, 'sectors': 167770079, 'sectorsize': 512, 'size': '80.00 GB', 'start': '2048', 'uuid': 'eb4c1330-a351-47ae-b39f-6c88d500daef'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:25:08.429963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-42-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:25:08.429971 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a733058e--5b74--5553--b3bf--66d1cbf46d31-osd--block--a733058e--5b74--5553--b3bf--66d1cbf46d31', 'dm-uuid-LVM-LRBCVsAuQ4NYflbyU4pf0eP05SUfKllFaKERMg5N4jfaILvyunRxXIrcd5Q5Pt52'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429977 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8d632655--ba67--5245--89a0--0cb971b00289-osd--block--8d632655--ba67--5245--89a0--0cb971b00289', 'dm-uuid-LVM-3UXIhqn3wzLYuFvUWcZP6rcvoyj26863wapNWwVMrWeewxCuHJKeNf5YRrv83XX5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429981 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429985 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.429993 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.429996 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430066 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430075 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430079 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c2ea885c--c09d--528a--8e30--9d64ecae89b3-osd--block--c2ea885c--c09d--528a--8e30--9d64ecae89b3', 'dm-uuid-LVM-eceZmWe6OR1E2fwKczuSYydrQhn8MZPRD9CNbWWHLbVocXD2HfKLhNJrURaTYB2l'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430083 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430087 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5037a2b3--768c--53ee--9f72--df4915d4fb6f-osd--block--5037a2b3--768c--53ee--9f72--df4915d4fb6f', 'dm-uuid-LVM-dTgv11CN0erm79ZzAiH5PP2f99pdpaj35eJpv4pXG1yMce0lvQ11QBsEbBMmsfDu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5410106d-ed3b-4664-9779-6ad1cc9646b0', 'scsi-SQEMU_QEMU_HARDDISK_5410106d-ed3b-4664-9779-6ad1cc9646b0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5410106d-ed3b-4664-9779-6ad1cc9646b0-part1', 'scsi-SQEMU_QEMU_HARDDISK_5410106d-ed3b-4664-9779-6ad1cc9646b0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['eb4c1330-a351-47ae-b39f-6c88d500daef']}, 'sectors': 167770079, 'sectorsize': 512, 'size': '80.00 GB', 'start': '2048', 'uuid': 'eb4c1330-a351-47ae-b39f-6c88d500daef'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:25:08.430113 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430121 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430126 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': 
['ceph--a733058e--5b74--5553--b3bf--66d1cbf46d31-osd--block--a733058e--5b74--5553--b3bf--66d1cbf46d31'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oXYt9l-zYKn-vfrZ-WuMo-ABm2-vvvj-AvKrQB', 'scsi-0QEMU_QEMU_HARDDISK_47b67cf6-6134-4ebc-b4bd-75f5912c51d1', 'scsi-SQEMU_QEMU_HARDDISK_47b67cf6-6134-4ebc-b4bd-75f5912c51d1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:25:08.430130 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430134 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430138 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': 
'0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430142 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8d632655--ba67--5245--89a0--0cb971b00289-osd--block--8d632655--ba67--5245--89a0--0cb971b00289'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BcF1zn-BnUo-Islx-uRhu-9F36-gT48-C6uyQB', 'scsi-0QEMU_QEMU_HARDDISK_e02eada2-9691-4994-b44c-0b327a73be9a', 'scsi-SQEMU_QEMU_HARDDISK_e02eada2-9691-4994-b44c-0b327a73be9a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:25:08.430146 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430172 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430177 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe3c3c4e-2b96-4bec-8093-d77b3db985a2', 'scsi-SQEMU_QEMU_HARDDISK_fe3c3c4e-2b96-4bec-8093-d77b3db985a2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:25:08.430182 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_956b92a8-e2a8-4c28-b21e-590538c1fc3c', 'scsi-SQEMU_QEMU_HARDDISK_956b92a8-e2a8-4c28-b21e-590538c1fc3c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_956b92a8-e2a8-4c28-b21e-590538c1fc3c-part1', 'scsi-SQEMU_QEMU_HARDDISK_956b92a8-e2a8-4c28-b21e-590538c1fc3c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['eb4c1330-a351-47ae-b39f-6c88d500daef']}, 'sectors': 167770079, 'sectorsize': 512, 'size': '80.00 GB', 'start': '2048', 'uuid': 'eb4c1330-a351-47ae-b39f-6c88d500daef'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:25:08.430186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c2ea885c--c09d--528a--8e30--9d64ecae89b3-osd--block--c2ea885c--c09d--528a--8e30--9d64ecae89b3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gPOHnc-FYBr-qF2E-WZSS-16C9-ZJrz-QQ3ji2', 'scsi-0QEMU_QEMU_HARDDISK_cbc49688-9ad7-4fd0-a52c-a19b0583b25c', 'scsi-SQEMU_QEMU_HARDDISK_cbc49688-9ad7-4fd0-a52c-a19b0583b25c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:25:08.430191 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-42-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:25:08.430261 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5037a2b3--768c--53ee--9f72--df4915d4fb6f-osd--block--5037a2b3--768c--53ee--9f72--df4915d4fb6f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cK655S-Nnlc-nB0c-l8ma-s1bR-vBMZ-tbphde', 'scsi-0QEMU_QEMU_HARDDISK_1d5b9d5f-7727-4753-bdb1-c3a309291ad5', 'scsi-SQEMU_QEMU_HARDDISK_1d5b9d5f-7727-4753-bdb1-c3a309291ad5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:25:08.430272 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.430276 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_736d04ae-95cc-4835-aff1-6fbe44d77808', 'scsi-SQEMU_QEMU_HARDDISK_736d04ae-95cc-4835-aff1-6fbe44d77808'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:25:08.430280 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-42-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:25:08.430284 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.430288 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3d06229f--4e10--52c4--b396--8cb508609dff-osd--block--3d06229f--4e10--52c4--b396--8cb508609dff', 'dm-uuid-LVM-SEvrrBUsOXsdHRPsgBOMCYdYCHW0QRZTuSP5eWfnVSAnOZj74dOgbCxEA9w4bH0E'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430293 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--81820e8a--af8a--5909--b466--981a4bed2414-osd--block--81820e8a--af8a--5909--b466--981a4bed2414', 'dm-uuid-LVM-Vxecnppb2BKZw0ce7eQ0jWxT7TNCX9gURk3jwgF0EUQNBGKug81YnrkDpxAK1m14'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430301 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430374 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430383 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430427 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2025-07-12 20:25:08.430581 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:25:08.430598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9eb58a9-7a8d-4884-8549-7422e45233bf', 'scsi-SQEMU_QEMU_HARDDISK_a9eb58a9-7a8d-4884-8549-7422e45233bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9eb58a9-7a8d-4884-8549-7422e45233bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_a9eb58a9-7a8d-4884-8549-7422e45233bf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['eb4c1330-a351-47ae-b39f-6c88d500daef']}, 'sectors': 167770079, 'sectorsize': 512, 'size': '80.00 GB', 'start': '2048', 'uuid': 'eb4c1330-a351-47ae-b39f-6c88d500daef'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:25:08.430607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3d06229f--4e10--52c4--b396--8cb508609dff-osd--block--3d06229f--4e10--52c4--b396--8cb508609dff'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-u9ZO1C-cVZT-KDbP-pCfQ-8lwy-mE9f-DG554V', 'scsi-0QEMU_QEMU_HARDDISK_9f08906f-6338-431f-a878-f727643915a4', 'scsi-SQEMU_QEMU_HARDDISK_9f08906f-6338-431f-a878-f727643915a4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:25:08.430652 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--81820e8a--af8a--5909--b466--981a4bed2414-osd--block--81820e8a--af8a--5909--b466--981a4bed2414'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DJ9JLp-MG6W-EyaC-SY2P-58v9-0aUr-JN9DFN', 'scsi-0QEMU_QEMU_HARDDISK_1628f950-5804-44ef-9d42-f709daecc346', 'scsi-SQEMU_QEMU_HARDDISK_1628f950-5804-44ef-9d42-f709daecc346'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:25:08.430660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5652225-c6ef-49dc-a608-4c92c2a71dd6', 'scsi-SQEMU_QEMU_HARDDISK_d5652225-c6ef-49dc-a608-4c92c2a71dd6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:25:08.430666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-42-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:25:08.430674 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.430680 | orchestrator | 2025-07-12 20:25:08.430688 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-07-12 20:25:08.430694 | orchestrator | Saturday 12 July 2025 20:13:57 +0000 (0:00:01.549) 0:00:40.856 ********* 2025-07-12 20:25:08.430701 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.430708 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.430720 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.430749 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.430757 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.430764 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.430771 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.430778 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.430785 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62f31422-022f-413d-8784-b59e1dab1027', 'scsi-SQEMU_QEMU_HARDDISK_62f31422-022f-413d-8784-b59e1dab1027'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_62f31422-022f-413d-8784-b59e1dab1027-part1', 'scsi-SQEMU_QEMU_HARDDISK_62f31422-022f-413d-8784-b59e1dab1027-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['eb4c1330-a351-47ae-b39f-6c88d500daef']}, 'sectors': 167770079, 'sectorsize': 512, 'size': '80.00 GB', 'start': '2048', 'uuid': 'eb4c1330-a351-47ae-b39f-6c88d500daef'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.430820 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-42-43-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.430829 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': 
{'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.430836 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.430842 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.430849 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': 
[], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.430939 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.430978 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.430986 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.430993 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.431000 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3da5b399-01e7-4def-b33a-29c13319e0e2', 'scsi-SQEMU_QEMU_HARDDISK_3da5b399-01e7-4def-b33a-29c13319e0e2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3da5b399-01e7-4def-b33a-29c13319e0e2-part1', 'scsi-SQEMU_QEMU_HARDDISK_3da5b399-01e7-4def-b33a-29c13319e0e2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['eb4c1330-a351-47ae-b39f-6c88d500daef']}, 'sectors': 167770079, 'sectorsize': 512, 'size': '80.00 GB', 'start': '2048', 'uuid': 'eb4c1330-a351-47ae-b39f-6c88d500daef'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.431419 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-42-40-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.431437 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.431472 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in 
groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.431481 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.431489 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.431496 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.431502 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.431517 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.431524 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.431551 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.431560 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2900e8ba-3a3c-419f-a89d-80346bc85f37', 'scsi-SQEMU_QEMU_HARDDISK_2900e8ba-3a3c-419f-a89d-80346bc85f37'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_2900e8ba-3a3c-419f-a89d-80346bc85f37-part1', 'scsi-SQEMU_QEMU_HARDDISK_2900e8ba-3a3c-419f-a89d-80346bc85f37-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['eb4c1330-a351-47ae-b39f-6c88d500daef']}, 'sectors': 167770079, 'sectorsize': 512, 'size': '80.00 GB', 'start': '2048', 'uuid': 'eb4c1330-a351-47ae-b39f-6c88d500daef'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.431568 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-42-46-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:25:08.431583 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.431590 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a733058e--5b74--5553--b3bf--66d1cbf46d31-osd--block--a733058e--5b74--5553--b3bf--66d1cbf46d31', 'dm-uuid-LVM-LRBCVsAuQ4NYflbyU4pf0eP05SUfKllFaKERMg5N4jfaILvyunRxXIrcd5Q5Pt52'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
[... further per-device skip messages elided: testbed-node-3, testbed-node-4 and testbed-node-5 skip every remaining device (dm-1, loop0-loop7, sda-sdd, sr0) for the same reason, the conditional 'osd_auto_discovery | default(False) | bool' evaluated to False ...]
2025-07-12 20:25:08.431605 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.432146 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.432226 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.432314 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.432318 | orchestrator |
2025-07-12 20:25:08.432322 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-07-12 20:25:08.432326 | orchestrator | Saturday 12 July 2025  20:13:59 +0000 (0:00:02.170)       0:00:43.026 *********
2025-07-12 20:25:08.432330 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.432334 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.432338 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.432359 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.432363 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.432367 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.432371 | orchestrator |
2025-07-12 20:25:08.432375 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-07-12 20:25:08.432379 | orchestrator | Saturday 12 July 2025  20:14:01 +0000 (0:00:02.186)       0:00:45.212 *********
2025-07-12 20:25:08.432383 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.432386 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.432390 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.432394 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.432398 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.432402 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.432405 | orchestrator |
2025-07-12 20:25:08.432409 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-07-12 20:25:08.432428 | orchestrator | Saturday 12 July 2025  20:14:02 +0000 (0:00:00.994)       0:00:46.207 *********
2025-07-12 20:25:08.432432 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.432436 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.432440 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.432447 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.432451 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.432455 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.432459 | orchestrator |
2025-07-12 20:25:08.432462 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-07-12 20:25:08.432466 | orchestrator | Saturday 12 July 2025  20:14:03 +0000 (0:00:00.988)       0:00:47.195 *********
2025-07-12 20:25:08.432470 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.432474 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.432478 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.432482 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.432485 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.432489 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.432494 | orchestrator |
2025-07-12 20:25:08.432500 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-07-12 20:25:08.432797 | orchestrator | Saturday 12 July 2025  20:14:04 +0000 (0:00:00.714)       0:00:47.910 *********
2025-07-12 20:25:08.432804 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.432811 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.432817 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.432824 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.432830 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.432834 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.432840 | orchestrator |
2025-07-12 20:25:08.432846 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-07-12 20:25:08.432853 | orchestrator | Saturday 12 July 2025  20:14:05 +0000 (0:00:01.426)       0:00:49.337 *********
2025-07-12 20:25:08.432859 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.432867 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.432872 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.432879 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.432885 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.432891 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.432898 | orchestrator |
2025-07-12 20:25:08.432904 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-07-12 20:25:08.432911 | orchestrator | Saturday 12 July 2025  20:14:07 +0000 (0:00:01.258)       0:00:50.595 *********
2025-07-12 20:25:08.432918 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 20:25:08.432924 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 20:25:08.432931 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-07-12 20:25:08.432937 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 20:25:08.432944 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-07-12 20:25:08.432951 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-07-12 20:25:08.432957 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-07-12 20:25:08.432963 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-07-12 20:25:08.432969 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-07-12 20:25:08.432976 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-07-12 20:25:08.432982 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-07-12 20:25:08.432989 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-07-12 20:25:08.432996 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-07-12 20:25:08.433002 | orchestrator | ok:
[testbed-node-2] => (item=testbed-node-2) 2025-07-12 20:25:08.433008 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-07-12 20:25:08.433015 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-07-12 20:25:08.433021 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-07-12 20:25:08.433027 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-07-12 20:25:08.433034 | orchestrator | 2025-07-12 20:25:08.433040 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-07-12 20:25:08.433054 | orchestrator | Saturday 12 July 2025 20:14:11 +0000 (0:00:04.265) 0:00:54.861 ********* 2025-07-12 20:25:08.433061 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-12 20:25:08.433068 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-12 20:25:08.433075 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-12 20:25:08.433081 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.433087 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-07-12 20:25:08.433094 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-07-12 20:25:08.433100 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-07-12 20:25:08.433106 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.433113 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-07-12 20:25:08.433119 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-07-12 20:25:08.433125 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-07-12 20:25:08.433132 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.433138 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-07-12 20:25:08.433144 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-07-12 20:25:08.433151 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-07-12 20:25:08.433157 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.433164 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-07-12 20:25:08.433170 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-07-12 20:25:08.433177 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-07-12 20:25:08.433183 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.433190 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-07-12 20:25:08.433196 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-07-12 20:25:08.433266 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-07-12 20:25:08.433274 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.433280 | orchestrator | 2025-07-12 20:25:08.433287 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-07-12 20:25:08.433293 | orchestrator | Saturday 12 July 2025 20:14:12 +0000 (0:00:00.645) 0:00:55.506 ********* 2025-07-12 20:25:08.433299 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.433306 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.433313 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.433320 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:25:08.433327 | orchestrator | 2025-07-12 20:25:08.433333 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-07-12 20:25:08.433376 | orchestrator | Saturday 12 July 2025 20:14:13 +0000 (0:00:01.252) 0:00:56.758 ********* 2025-07-12 20:25:08.433384 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.433390 | orchestrator | skipping: 
[testbed-node-4] 2025-07-12 20:25:08.433397 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.433412 | orchestrator | 2025-07-12 20:25:08.433419 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-07-12 20:25:08.433426 | orchestrator | Saturday 12 July 2025 20:14:14 +0000 (0:00:00.670) 0:00:57.429 ********* 2025-07-12 20:25:08.433432 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.433439 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.433445 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.433452 | orchestrator | 2025-07-12 20:25:08.433458 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-07-12 20:25:08.433465 | orchestrator | Saturday 12 July 2025 20:14:14 +0000 (0:00:00.781) 0:00:58.210 ********* 2025-07-12 20:25:08.433471 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.433486 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.433492 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.433498 | orchestrator | 2025-07-12 20:25:08.433505 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-07-12 20:25:08.433512 | orchestrator | Saturday 12 July 2025 20:14:15 +0000 (0:00:00.573) 0:00:58.784 ********* 2025-07-12 20:25:08.433518 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.433525 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.433599 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.433616 | orchestrator | 2025-07-12 20:25:08.433622 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-07-12 20:25:08.433628 | orchestrator | Saturday 12 July 2025 20:14:16 +0000 (0:00:01.181) 0:00:59.966 ********* 2025-07-12 20:25:08.433635 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 20:25:08.433641 | orchestrator | 
skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 20:25:08.433647 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 20:25:08.433653 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.433659 | orchestrator | 2025-07-12 20:25:08.433667 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-07-12 20:25:08.433671 | orchestrator | Saturday 12 July 2025 20:14:17 +0000 (0:00:00.768) 0:01:00.734 ********* 2025-07-12 20:25:08.433675 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 20:25:08.433678 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 20:25:08.433682 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 20:25:08.433686 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.433691 | orchestrator | 2025-07-12 20:25:08.433697 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-07-12 20:25:08.433704 | orchestrator | Saturday 12 July 2025 20:14:18 +0000 (0:00:00.936) 0:01:01.672 ********* 2025-07-12 20:25:08.433709 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 20:25:08.433715 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 20:25:08.433721 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 20:25:08.433728 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.433734 | orchestrator | 2025-07-12 20:25:08.433740 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-07-12 20:25:08.433747 | orchestrator | Saturday 12 July 2025 20:14:19 +0000 (0:00:01.371) 0:01:03.044 ********* 2025-07-12 20:25:08.433753 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.433759 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.433766 | orchestrator | ok: [testbed-node-5] 
2025-07-12 20:25:08.433770 | orchestrator | 2025-07-12 20:25:08.433773 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-07-12 20:25:08.433777 | orchestrator | Saturday 12 July 2025 20:14:20 +0000 (0:00:00.631) 0:01:03.676 ********* 2025-07-12 20:25:08.433781 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-07-12 20:25:08.433785 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-07-12 20:25:08.433791 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-07-12 20:25:08.433797 | orchestrator | 2025-07-12 20:25:08.433803 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-07-12 20:25:08.433809 | orchestrator | Saturday 12 July 2025 20:14:21 +0000 (0:00:00.859) 0:01:04.535 ********* 2025-07-12 20:25:08.433816 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-12 20:25:08.433821 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-12 20:25:08.433828 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-12 20:25:08.433835 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-07-12 20:25:08.433841 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-07-12 20:25:08.433847 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-12 20:25:08.433890 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-12 20:25:08.433898 | orchestrator | 2025-07-12 20:25:08.433905 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-07-12 20:25:08.433911 | orchestrator | Saturday 12 July 2025 20:14:22 +0000 (0:00:01.082) 0:01:05.617 ********* 2025-07-12 20:25:08.433918 | orchestrator | ok: [testbed-node-0] => 
(item=testbed-node-0) 2025-07-12 20:25:08.433925 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-12 20:25:08.433932 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-12 20:25:08.433939 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-07-12 20:25:08.433945 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-07-12 20:25:08.433952 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-12 20:25:08.433958 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-12 20:25:08.433965 | orchestrator | 2025-07-12 20:25:08.433972 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-12 20:25:08.433978 | orchestrator | Saturday 12 July 2025 20:14:24 +0000 (0:00:02.452) 0:01:08.070 ********* 2025-07-12 20:25:08.433987 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:25:08.433995 | orchestrator | 2025-07-12 20:25:08.434001 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-12 20:25:08.434008 | orchestrator | Saturday 12 July 2025 20:14:26 +0000 (0:00:01.419) 0:01:09.489 ********* 2025-07-12 20:25:08.434055 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:25:08.434064 | orchestrator | 2025-07-12 20:25:08.434070 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-12 20:25:08.434077 | orchestrator | Saturday 12 July 2025 
20:14:27 +0000 (0:00:01.606) 0:01:11.096 ********* 2025-07-12 20:25:08.434083 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.434089 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.434095 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.434101 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.434108 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.434114 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.434121 | orchestrator | 2025-07-12 20:25:08.434127 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-12 20:25:08.434134 | orchestrator | Saturday 12 July 2025 20:14:29 +0000 (0:00:01.999) 0:01:13.096 ********* 2025-07-12 20:25:08.434140 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.434147 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.434153 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.434159 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.434166 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.434172 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.434178 | orchestrator | 2025-07-12 20:25:08.434185 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-12 20:25:08.434191 | orchestrator | Saturday 12 July 2025 20:14:31 +0000 (0:00:01.461) 0:01:14.558 ********* 2025-07-12 20:25:08.434198 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.434204 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.434210 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.434216 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.434223 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.434229 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.434242 | orchestrator | 2025-07-12 20:25:08.434249 | orchestrator | TASK [ceph-handler : Check for a rgw container] 
******************************** 2025-07-12 20:25:08.434255 | orchestrator | Saturday 12 July 2025 20:14:32 +0000 (0:00:01.428) 0:01:15.986 ********* 2025-07-12 20:25:08.434261 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.434267 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.434273 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.434279 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.434286 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.434292 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.434298 | orchestrator | 2025-07-12 20:25:08.434304 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-12 20:25:08.434310 | orchestrator | Saturday 12 July 2025 20:14:33 +0000 (0:00:01.248) 0:01:17.235 ********* 2025-07-12 20:25:08.434316 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.434323 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.434329 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.434335 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.434379 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.434387 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.434393 | orchestrator | 2025-07-12 20:25:08.434399 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-12 20:25:08.434406 | orchestrator | Saturday 12 July 2025 20:14:34 +0000 (0:00:01.149) 0:01:18.384 ********* 2025-07-12 20:25:08.434412 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.434418 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.434424 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.434430 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.434436 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.434442 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.434448 | 
orchestrator | 2025-07-12 20:25:08.434454 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-12 20:25:08.434460 | orchestrator | Saturday 12 July 2025 20:14:35 +0000 (0:00:00.780) 0:01:19.164 ********* 2025-07-12 20:25:08.434467 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.434473 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.434479 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.434485 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.434495 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.434525 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.434532 | orchestrator | 2025-07-12 20:25:08.434538 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-12 20:25:08.434544 | orchestrator | Saturday 12 July 2025 20:14:37 +0000 (0:00:01.288) 0:01:20.453 ********* 2025-07-12 20:25:08.434551 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.434557 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.434563 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.434569 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.434576 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.434582 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.434588 | orchestrator | 2025-07-12 20:25:08.434594 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-12 20:25:08.434600 | orchestrator | Saturday 12 July 2025 20:14:38 +0000 (0:00:01.197) 0:01:21.651 ********* 2025-07-12 20:25:08.434606 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.434612 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.434618 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.434624 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.434630 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.434636 | 
orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.434643 | orchestrator | 2025-07-12 20:25:08.434649 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-12 20:25:08.434655 | orchestrator | Saturday 12 July 2025 20:14:39 +0000 (0:00:01.526) 0:01:23.177 ********* 2025-07-12 20:25:08.434661 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.434672 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.434678 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.434684 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.434690 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.434697 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.434702 | orchestrator | 2025-07-12 20:25:08.434708 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-12 20:25:08.434715 | orchestrator | Saturday 12 July 2025 20:14:40 +0000 (0:00:00.694) 0:01:23.871 ********* 2025-07-12 20:25:08.434721 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.434727 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.434733 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.434739 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.434745 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.434751 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.434757 | orchestrator | 2025-07-12 20:25:08.434764 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-12 20:25:08.434770 | orchestrator | Saturday 12 July 2025 20:14:41 +0000 (0:00:00.894) 0:01:24.765 ********* 2025-07-12 20:25:08.434776 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.434782 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.434788 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.434794 | orchestrator | ok: 
[testbed-node-3] 2025-07-12 20:25:08.434800 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.434806 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.434812 | orchestrator | 2025-07-12 20:25:08.434818 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-12 20:25:08.434825 | orchestrator | Saturday 12 July 2025 20:14:41 +0000 (0:00:00.555) 0:01:25.321 ********* 2025-07-12 20:25:08.434832 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.434835 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.434839 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.434843 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.434846 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.434850 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.434854 | orchestrator | 2025-07-12 20:25:08.434857 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-12 20:25:08.434861 | orchestrator | Saturday 12 July 2025 20:14:42 +0000 (0:00:00.763) 0:01:26.085 ********* 2025-07-12 20:25:08.434865 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.434868 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.434872 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.434875 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.434879 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.434883 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.434886 | orchestrator | 2025-07-12 20:25:08.434890 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-12 20:25:08.434893 | orchestrator | Saturday 12 July 2025 20:14:43 +0000 (0:00:00.607) 0:01:26.693 ********* 2025-07-12 20:25:08.434897 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.434901 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.434904 | 
orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.434908 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.434912 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.434915 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.434919 | orchestrator | 2025-07-12 20:25:08.434922 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-12 20:25:08.434926 | orchestrator | Saturday 12 July 2025 20:14:44 +0000 (0:00:00.800) 0:01:27.493 ********* 2025-07-12 20:25:08.434930 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.434933 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.434937 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.434940 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.434944 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.434951 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.434955 | orchestrator | 2025-07-12 20:25:08.434959 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-12 20:25:08.434962 | orchestrator | Saturday 12 July 2025 20:14:44 +0000 (0:00:00.571) 0:01:28.065 ********* 2025-07-12 20:25:08.434966 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.434970 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.434973 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.434977 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.434980 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.434984 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.434988 | orchestrator | 2025-07-12 20:25:08.434991 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-12 20:25:08.434995 | orchestrator | Saturday 12 July 2025 20:14:45 +0000 (0:00:00.707) 0:01:28.772 ********* 2025-07-12 20:25:08.434999 | orchestrator | ok: 
[testbed-node-0] 2025-07-12 20:25:08.435002 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.435006 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.435009 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.435031 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.435035 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.435039 | orchestrator | 2025-07-12 20:25:08.435043 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-12 20:25:08.435046 | orchestrator | Saturday 12 July 2025 20:14:45 +0000 (0:00:00.524) 0:01:29.297 ********* 2025-07-12 20:25:08.435050 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.435054 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.435057 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.435061 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.435065 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.435068 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.435072 | orchestrator | 2025-07-12 20:25:08.435076 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] *************** 2025-07-12 20:25:08.435079 | orchestrator | Saturday 12 July 2025 20:14:46 +0000 (0:00:01.063) 0:01:30.361 ********* 2025-07-12 20:25:08.435083 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:25:08.435087 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:25:08.435090 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:25:08.435094 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:25:08.435098 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:25:08.435101 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:25:08.435105 | orchestrator | 2025-07-12 20:25:08.435109 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ****************************** 2025-07-12 20:25:08.435112 | orchestrator | Saturday 12 July 2025 20:14:48 +0000 (0:00:01.399) 0:01:31.760 
*********
2025-07-12 20:25:08.435116 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:25:08.435120 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:25:08.435123 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:25:08.435127 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:25:08.435131 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:25:08.435134 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:25:08.435138 | orchestrator |
2025-07-12 20:25:08.435142 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-07-12 20:25:08.435145 | orchestrator | Saturday 12 July 2025 20:14:50 +0000 (0:00:01.795) 0:01:33.555 *********
2025-07-12 20:25:08.435149 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.435153 | orchestrator |
2025-07-12 20:25:08.435157 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-07-12 20:25:08.435160 | orchestrator | Saturday 12 July 2025 20:14:51 +0000 (0:00:01.087) 0:01:34.642 *********
2025-07-12 20:25:08.435164 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.435168 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.435175 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.435178 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.435182 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.435186 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.435189 | orchestrator |
2025-07-12 20:25:08.435193 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-07-12 20:25:08.435197 | orchestrator | Saturday 12 July 2025 20:14:51 +0000 (0:00:00.635) 0:01:35.278 *********
2025-07-12 20:25:08.435200 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.435204 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.435208 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.435211 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.435215 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.435219 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.435222 | orchestrator |
2025-07-12 20:25:08.435226 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-07-12 20:25:08.435230 | orchestrator | Saturday 12 July 2025 20:14:52 +0000 (0:00:00.484) 0:01:35.762 *********
2025-07-12 20:25:08.435233 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-12 20:25:08.435237 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-12 20:25:08.435241 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-12 20:25:08.435244 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-12 20:25:08.435248 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-12 20:25:08.435252 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-12 20:25:08.435255 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-12 20:25:08.435259 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-12 20:25:08.435263 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-07-12 20:25:08.435266 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-12 20:25:08.435270 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-12 20:25:08.435274 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-07-12 20:25:08.435277 | orchestrator |
2025-07-12 20:25:08.435281 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-07-12 20:25:08.435285 | orchestrator | Saturday 12 July 2025 20:14:53 +0000 (0:00:01.283) 0:01:37.046 *********
2025-07-12 20:25:08.435288 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:25:08.435292 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:25:08.435296 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:25:08.435299 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:25:08.435303 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:25:08.435307 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:25:08.435310 | orchestrator |
2025-07-12 20:25:08.435314 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-07-12 20:25:08.435318 | orchestrator | Saturday 12 July 2025 20:14:54 +0000 (0:00:00.799) 0:01:37.845 *********
2025-07-12 20:25:08.435324 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.435339 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.435359 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.435366 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.435372 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.435378 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.435383 | orchestrator |
2025-07-12 20:25:08.435390 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-07-12 20:25:08.435394 | orchestrator | Saturday 12 July 2025 20:14:55 +0000 (0:00:00.687) 0:01:38.533 *********
2025-07-12 20:25:08.435403 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.435406 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.435410 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.435414 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.435417 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.435421 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.435424 | orchestrator |
2025-07-12 20:25:08.435428 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-07-12 20:25:08.435432 | orchestrator | Saturday 12 July 2025 20:14:55 +0000 (0:00:00.649) 0:01:39.183 *********
2025-07-12 20:25:08.435435 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.435439 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.435442 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.435446 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.435450 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.435453 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.435457 | orchestrator |
2025-07-12 20:25:08.435461 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-07-12 20:25:08.435464 | orchestrator | Saturday 12 July 2025 20:14:56 +0000 (0:00:00.852) 0:01:40.035 *********
2025-07-12 20:25:08.435468 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.435472 | orchestrator |
2025-07-12 20:25:08.435475 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-07-12 20:25:08.435479 | orchestrator | Saturday 12 July 2025 20:14:57 +0000 (0:00:01.339) 0:01:41.374 *********
2025-07-12 20:25:08.435483 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.435486 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.435490 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.435494 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.435497 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.435501 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.435504 | orchestrator |
2025-07-12 20:25:08.435508 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2025-07-12 20:25:08.435512 | orchestrator | Saturday 12 July 2025 20:16:19 +0000 (0:01:21.423) 0:03:02.798 *********
2025-07-12 20:25:08.435516 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-12 20:25:08.435519 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-12 20:25:08.435523 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-12 20:25:08.435526 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.435530 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-12 20:25:08.435534 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-12 20:25:08.435537 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-12 20:25:08.435541 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.435545 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-12 20:25:08.435548 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-12 20:25:08.435552 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-12 20:25:08.435556 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.435559 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-12 20:25:08.435563 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-12 20:25:08.435567 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-12 20:25:08.435570 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.435574 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-12 20:25:08.435584 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-12 20:25:08.435588 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-12 20:25:08.435591 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.435595 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-07-12 20:25:08.435599 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-07-12 20:25:08.435602 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-07-12 20:25:08.435606 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.435609 | orchestrator |
2025-07-12 20:25:08.435613 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-07-12 20:25:08.435617 | orchestrator | Saturday 12 July 2025 20:16:20 +0000 (0:00:01.081) 0:03:03.880 *********
2025-07-12 20:25:08.435620 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.435624 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.435627 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.435631 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.435635 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.435638 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.435642 | orchestrator |
2025-07-12 20:25:08.435646 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-07-12 20:25:08.435665 | orchestrator | Saturday 12 July 2025 20:16:21 +0000 (0:00:00.643) 0:03:04.524 *********
2025-07-12 20:25:08.435670 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.435695 | orchestrator |
2025-07-12 20:25:08.435699 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-07-12 20:25:08.435702 | orchestrator | Saturday 12 July 2025 20:16:21 +0000 (0:00:00.189) 0:03:04.713 *********
2025-07-12 20:25:08.435706 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.435710 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.435714 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.435717 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.435721 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.435725 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.435729 | orchestrator |
2025-07-12 20:25:08.435732 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-07-12 20:25:08.435736 | orchestrator | Saturday 12 July 2025 20:16:22 +0000 (0:00:01.033) 0:03:05.747 *********
2025-07-12 20:25:08.435740 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.435743 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.435747 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.435751 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.435754 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.435758 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.435762 | orchestrator |
2025-07-12 20:25:08.435766 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-07-12 20:25:08.435769 | orchestrator | Saturday 12 July 2025 20:16:23 +0000 (0:00:00.683) 0:03:06.430 *********
2025-07-12 20:25:08.435773 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.435777 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.435780 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.435784 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.435788 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.435791 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.435795 | orchestrator |
2025-07-12 20:25:08.435799 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-07-12 20:25:08.435803 | orchestrator | Saturday 12 July 2025 20:16:23 +0000 (0:00:00.826) 0:03:07.256 *********
2025-07-12 20:25:08.435806 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.435810 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.435814 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.435821 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.435825 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.435829 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.435832 | orchestrator |
2025-07-12 20:25:08.435836 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-07-12 20:25:08.435840 | orchestrator | Saturday 12 July 2025 20:16:26 +0000 (0:00:03.048) 0:03:10.305 *********
2025-07-12 20:25:08.435844 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.435847 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.435851 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.435855 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.435858 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.435862 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.435866 | orchestrator |
2025-07-12 20:25:08.435869 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-07-12 20:25:08.435873 | orchestrator | Saturday 12 July 2025 20:16:27 +0000 (0:00:00.900) 0:03:11.205 *********
2025-07-12 20:25:08.435878 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.435901 | orchestrator |
2025-07-12 20:25:08.435905 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-07-12 20:25:08.435919 | orchestrator | Saturday 12 July 2025 20:16:29 +0000 (0:00:01.263) 0:03:12.469 *********
2025-07-12 20:25:08.435923 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.435927 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.435930 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.435934 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.435938 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.435942 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.435945 | orchestrator |
2025-07-12 20:25:08.435949 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-07-12 20:25:08.435953 | orchestrator | Saturday 12 July 2025 20:16:29 +0000 (0:00:00.810) 0:03:13.279 *********
2025-07-12 20:25:08.435957 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.435960 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.435964 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.435968 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.435972 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.435975 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.435979 | orchestrator |
2025-07-12 20:25:08.435983 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2025-07-12 20:25:08.435987 | orchestrator | Saturday 12 July 2025 20:16:30 +0000 (0:00:00.993) 0:03:14.273 *********
2025-07-12 20:25:08.435990 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.435994 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.435998 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.436001 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.436005 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.436009 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.436013 | orchestrator |
2025-07-12 20:25:08.436017 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2025-07-12 20:25:08.436020 | orchestrator | Saturday 12 July 2025 20:16:31 +0000 (0:00:00.559) 0:03:14.833 *********
2025-07-12 20:25:08.436024 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.436028 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.436031 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.436035 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.436039 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.436042 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.436046 | orchestrator |
2025-07-12 20:25:08.436072 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2025-07-12 20:25:08.436076 | orchestrator | Saturday 12 July 2025 20:16:32 +0000 (0:00:00.728) 0:03:15.561 *********
2025-07-12 20:25:08.436086 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.436093 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.436111 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.436115 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.436119 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.436122 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.436126 | orchestrator |
2025-07-12 20:25:08.436142 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2025-07-12 20:25:08.436146 | orchestrator | Saturday 12 July 2025 20:16:32 +0000 (0:00:00.595) 0:03:16.157 *********
2025-07-12 20:25:08.436150 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.436154 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.436158 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.436161 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.436165 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.436169 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.436172 | orchestrator |
2025-07-12 20:25:08.436176 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2025-07-12 20:25:08.436180 | orchestrator | Saturday 12 July 2025 20:16:33 +0000 (0:00:01.112) 0:03:17.270 *********
2025-07-12 20:25:08.436183 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.436187 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.436191 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.436195 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.436198 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.436202 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.436205 | orchestrator |
2025-07-12 20:25:08.436209 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2025-07-12 20:25:08.436213 | orchestrator | Saturday 12 July 2025 20:16:34 +0000 (0:00:00.884) 0:03:18.155 *********
2025-07-12 20:25:08.436217 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.436220 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.436224 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.436228 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.436232 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.436235 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.436239 | orchestrator |
2025-07-12 20:25:08.436243 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2025-07-12 20:25:08.436246 | orchestrator | Saturday 12 July 2025 20:16:35 +0000 (0:00:01.171) 0:03:19.326 *********
2025-07-12 20:25:08.436250 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.436254 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.436257 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.436261 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.436265 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.436269 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.436272 | orchestrator |
2025-07-12 20:25:08.436276 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2025-07-12 20:25:08.436280 | orchestrator | Saturday 12 July 2025 20:16:37 +0000 (0:00:01.801) 0:03:21.128 *********
2025-07-12 20:25:08.436284 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.436287 | orchestrator |
2025-07-12 20:25:08.436291 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2025-07-12 20:25:08.436295 | orchestrator | Saturday 12 July 2025 20:16:39 +0000 (0:00:01.503) 0:03:22.632 *********
2025-07-12 20:25:08.436299 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2025-07-12 20:25:08.436302 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2025-07-12 20:25:08.436306 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2025-07-12 20:25:08.436310 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2025-07-12 20:25:08.436314 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2025-07-12 20:25:08.436322 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2025-07-12 20:25:08.436328 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2025-07-12 20:25:08.436334 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2025-07-12 20:25:08.436351 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2025-07-12 20:25:08.436357 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2025-07-12 20:25:08.436363 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2025-07-12 20:25:08.436369 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2025-07-12 20:25:08.436375 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2025-07-12 20:25:08.436380 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2025-07-12 20:25:08.436398 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2025-07-12 20:25:08.436404 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2025-07-12 20:25:08.436410 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2025-07-12 20:25:08.436416 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2025-07-12 20:25:08.436422 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2025-07-12 20:25:08.436426 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2025-07-12 20:25:08.436432 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2025-07-12 20:25:08.436438 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2025-07-12 20:25:08.436442 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2025-07-12 20:25:08.436447 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2025-07-12 20:25:08.436454 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2025-07-12 20:25:08.436460 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2025-07-12 20:25:08.436465 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2025-07-12 20:25:08.436471 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2025-07-12 20:25:08.436477 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2025-07-12 20:25:08.436483 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2025-07-12 20:25:08.436515 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2025-07-12 20:25:08.436523 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2025-07-12 20:25:08.436528 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2025-07-12 20:25:08.436531 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2025-07-12 20:25:08.436535 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2025-07-12 20:25:08.436539 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2025-07-12 20:25:08.436545 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2025-07-12 20:25:08.436551 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2025-07-12 20:25:08.436557 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2025-07-12 20:25:08.436563 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2025-07-12 20:25:08.436569 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2025-07-12 20:25:08.436575 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2025-07-12 20:25:08.436582 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2025-07-12 20:25:08.436588 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2025-07-12 20:25:08.436594 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2025-07-12 20:25:08.436601 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2025-07-12 20:25:08.436608 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2025-07-12 20:25:08.436612 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2025-07-12 20:25:08.436621 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2025-07-12 20:25:08.436624 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2025-07-12 20:25:08.436628 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2025-07-12 20:25:08.436632 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2025-07-12 20:25:08.436635 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2025-07-12 20:25:08.436641 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2025-07-12 20:25:08.436647 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2025-07-12 20:25:08.436653 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2025-07-12 20:25:08.436659 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2025-07-12 20:25:08.436665 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2025-07-12 20:25:08.436671 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2025-07-12 20:25:08.436677 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2025-07-12 20:25:08.436684 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2025-07-12 20:25:08.436690 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2025-07-12 20:25:08.436696 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2025-07-12 20:25:08.436702 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2025-07-12 20:25:08.436708 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2025-07-12 20:25:08.436714 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2025-07-12 20:25:08.436720 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2025-07-12 20:25:08.436727 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2025-07-12 20:25:08.436731 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2025-07-12 20:25:08.436735 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2025-07-12 20:25:08.436741 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2025-07-12 20:25:08.436747 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2025-07-12 20:25:08.436753 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2025-07-12 20:25:08.436759 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2025-07-12 20:25:08.436766 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-07-12 20:25:08.436772 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2025-07-12 20:25:08.436792 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-07-12 20:25:08.436799 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2025-07-12 20:25:08.436805 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2025-07-12 20:25:08.436811 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-07-12 20:25:08.436817 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2025-07-12 20:25:08.436824 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-07-12 20:25:08.436830 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2025-07-12 20:25:08.436836 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2025-07-12 20:25:08.436842 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-07-12 20:25:08.436848 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2025-07-12 20:25:08.436854 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2025-07-12 20:25:08.436884 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2025-07-12 20:25:08.436898 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2025-07-12 20:25:08.436904 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2025-07-12 20:25:08.436909 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2025-07-12 20:25:08.436915 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2025-07-12 20:25:08.436921 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2025-07-12 20:25:08.436926 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2025-07-12 20:25:08.436931 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2025-07-12 20:25:08.436936 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2025-07-12 20:25:08.436942 | orchestrator |
2025-07-12 20:25:08.436949 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2025-07-12 20:25:08.436955 | orchestrator | Saturday 12 July 2025 20:16:46 +0000 (0:00:07.016) 0:03:29.649 *********
2025-07-12 20:25:08.436961 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.436967 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.436974 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.436981 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.436987 | orchestrator |
2025-07-12 20:25:08.436993 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] *****************
2025-07-12 20:25:08.436999 | orchestrator | Saturday 12 July 2025 20:16:47 +0000 (0:00:01.155) 0:03:30.805 *********
2025-07-12 20:25:08.437005 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-07-12 20:25:08.437012 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-07-12 20:25:08.437018 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-07-12 20:25:08.437025 | orchestrator |
2025-07-12 20:25:08.437032 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2025-07-12 20:25:08.437039 | orchestrator | Saturday 12 July 2025 20:16:48 +0000 (0:00:00.712) 0:03:31.518 *********
2025-07-12 20:25:08.437065 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-07-12 20:25:08.437073 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-07-12 20:25:08.437080 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-07-12 20:25:08.437087 | orchestrator |
2025-07-12 20:25:08.437094 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2025-07-12 20:25:08.437100 | orchestrator | Saturday 12 July 2025 20:16:49 +0000 (0:00:01.434) 0:03:32.953 *********
2025-07-12 20:25:08.437107 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.437113 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.437118 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.437124 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.437130 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.437136 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.437143 | orchestrator |
2025-07-12 20:25:08.437149 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2025-07-12 20:25:08.437156 | orchestrator | Saturday 12 July 2025 20:16:50 +0000 (0:00:00.551) 0:03:33.504 *********
2025-07-12 20:25:08.437163 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.437169 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.437176 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.437183 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.437190 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.437203 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.437210 | orchestrator |
2025-07-12 20:25:08.437217 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2025-07-12 20:25:08.437223 | orchestrator | Saturday 12 July 2025 20:16:50 +0000 (0:00:00.759) 0:03:34.264 *********
2025-07-12 20:25:08.437229 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.437235 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.437241 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.437247 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.437252 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.437258 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.437263 | orchestrator |
2025-07-12 20:25:08.437269 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2025-07-12 20:25:08.437275 | orchestrator | Saturday 12 July 2025 20:16:51 +0000 (0:00:00.551) 0:03:34.815 *********
2025-07-12 20:25:08.437281 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.437287 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.437293 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.437299 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.437305 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.437311 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.437317 | orchestrator |
2025-07-12 20:25:08.437323 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2025-07-12 20:25:08.437329 | orchestrator | Saturday 12 July 2025 20:16:52 +0000 (0:00:00.919) 0:03:35.735 *********
2025-07-12 20:25:08.437336 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.437355 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.437361 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.437367 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.437373 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.437379 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.437385 | orchestrator |
2025-07-12 20:25:08.437446 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-07-12 20:25:08.437456 | orchestrator | Saturday 12 July 2025 20:16:53 +0000 (0:00:00.920) 0:03:36.454 *********
2025-07-12 20:25:08.437462 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.437469 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.437475 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.437481 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.437487 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.437493 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.437499 | orchestrator |
2025-07-12 20:25:08.437505 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-07-12 20:25:08.437511 | orchestrator | Saturday 12 July 2025 20:16:53 +0000 (0:00:00.920) 0:03:37.375 *********
2025-07-12 20:25:08.437518 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.437524 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.437530 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.437535 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.437541 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.437548 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.437554 | orchestrator | 2025-07-12 20:25:08.437560 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-07-12 20:25:08.437566 | orchestrator | Saturday 12 July 2025 20:16:54 +0000 (0:00:00.666) 0:03:38.042 ********* 2025-07-12 20:25:08.437573 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.437579 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.437585 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.437591 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.437597 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.437603 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.437609 | orchestrator | 2025-07-12 20:25:08.437623 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-07-12 20:25:08.437629 | orchestrator | Saturday 12 July 2025 20:16:55 +0000 (0:00:00.997) 0:03:39.040 ********* 2025-07-12 20:25:08.437655 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.437661 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.437668 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.437674 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.437680 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.437686 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.437692 | orchestrator | 2025-07-12 20:25:08.437699 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-07-12 20:25:08.437705 | orchestrator | Saturday 12 July 2025 20:16:58 +0000 (0:00:02.990) 
0:03:42.030 ********* 2025-07-12 20:25:08.437711 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.437718 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.437724 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.437730 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.437736 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.437743 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.437749 | orchestrator | 2025-07-12 20:25:08.437755 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-07-12 20:25:08.437762 | orchestrator | Saturday 12 July 2025 20:16:59 +0000 (0:00:01.064) 0:03:43.095 ********* 2025-07-12 20:25:08.437768 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.437774 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.437780 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.437786 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.437793 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.437798 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.437804 | orchestrator | 2025-07-12 20:25:08.437810 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-07-12 20:25:08.437816 | orchestrator | Saturday 12 July 2025 20:17:00 +0000 (0:00:00.913) 0:03:44.009 ********* 2025-07-12 20:25:08.437822 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.437828 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.437834 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.437841 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.437847 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.437854 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.437860 | orchestrator | 2025-07-12 20:25:08.437866 | orchestrator | TASK [ceph-config : Render rgw configs] 
**************************************** 2025-07-12 20:25:08.437872 | orchestrator | Saturday 12 July 2025 20:17:01 +0000 (0:00:00.909) 0:03:44.918 ********* 2025-07-12 20:25:08.437878 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.437884 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.437890 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.437897 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-07-12 20:25:08.437904 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-07-12 20:25:08.437910 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-07-12 20:25:08.437917 | orchestrator | 2025-07-12 20:25:08.437924 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2025-07-12 20:25:08.437929 | orchestrator | Saturday 12 July 2025 20:17:02 +0000 (0:00:00.717) 0:03:45.636 ********* 2025-07-12 20:25:08.437935 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.437942 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.437949 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.437958 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2025-07-12 20:25:08.438012 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, 
{'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2025-07-12 20:25:08.438045 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2025-07-12 20:25:08.438052 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2025-07-12 20:25:08.438059 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.438065 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.438072 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2025-07-12 20:25:08.438078 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])  2025-07-12 20:25:08.438085 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.438091 | orchestrator | 2025-07-12 20:25:08.438097 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2025-07-12 20:25:08.438103 | orchestrator | Saturday 12 July 2025 20:17:03 +0000 (0:00:00.983) 0:03:46.620 ********* 2025-07-12 20:25:08.438109 | 
orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.438115 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.438121 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.438127 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.438133 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.438140 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.438146 | orchestrator | 2025-07-12 20:25:08.438153 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2025-07-12 20:25:08.438159 | orchestrator | Saturday 12 July 2025 20:17:04 +0000 (0:00:00.788) 0:03:47.409 ********* 2025-07-12 20:25:08.438165 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.438171 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.438177 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.438183 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.438190 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.438196 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.438202 | orchestrator | 2025-07-12 20:25:08.438208 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-07-12 20:25:08.438214 | orchestrator | Saturday 12 July 2025 20:17:04 +0000 (0:00:00.828) 0:03:48.237 ********* 2025-07-12 20:25:08.438238 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.438245 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.438251 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.438257 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.438263 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.438270 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.438282 | orchestrator | 2025-07-12 20:25:08.438288 | orchestrator | TASK [ceph-facts : Set_fact 
_radosgw_address to radosgw_address_block ipv4] **** 2025-07-12 20:25:08.438294 | orchestrator | Saturday 12 July 2025 20:17:05 +0000 (0:00:00.670) 0:03:48.908 ********* 2025-07-12 20:25:08.438300 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.438306 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.438311 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.438318 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.438324 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.438331 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.438337 | orchestrator | 2025-07-12 20:25:08.438384 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-07-12 20:25:08.438391 | orchestrator | Saturday 12 July 2025 20:17:06 +0000 (0:00:00.901) 0:03:49.810 ********* 2025-07-12 20:25:08.438397 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.438404 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.438410 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.438416 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.438422 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.438429 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.438435 | orchestrator | 2025-07-12 20:25:08.438441 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-07-12 20:25:08.438447 | orchestrator | Saturday 12 July 2025 20:17:07 +0000 (0:00:00.811) 0:03:50.621 ********* 2025-07-12 20:25:08.438454 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.438460 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.438465 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.438471 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.438477 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.438483 | orchestrator | ok: 
[testbed-node-5] 2025-07-12 20:25:08.438490 | orchestrator | 2025-07-12 20:25:08.438496 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-07-12 20:25:08.438536 | orchestrator | Saturday 12 July 2025 20:17:08 +0000 (0:00:01.306) 0:03:51.928 ********* 2025-07-12 20:25:08.438542 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-07-12 20:25:08.438549 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-07-12 20:25:08.438555 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-07-12 20:25:08.438561 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.438567 | orchestrator | 2025-07-12 20:25:08.438573 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-07-12 20:25:08.438579 | orchestrator | Saturday 12 July 2025 20:17:08 +0000 (0:00:00.458) 0:03:52.386 ********* 2025-07-12 20:25:08.438585 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-07-12 20:25:08.438592 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-07-12 20:25:08.438598 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-07-12 20:25:08.438604 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.438610 | orchestrator | 2025-07-12 20:25:08.438617 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-07-12 20:25:08.438623 | orchestrator | Saturday 12 July 2025 20:17:09 +0000 (0:00:00.530) 0:03:52.917 ********* 2025-07-12 20:25:08.438629 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-07-12 20:25:08.438635 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-07-12 20:25:08.438641 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-07-12 20:25:08.438647 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.438653 | 
orchestrator | 2025-07-12 20:25:08.438659 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-07-12 20:25:08.438665 | orchestrator | Saturday 12 July 2025 20:17:09 +0000 (0:00:00.447) 0:03:53.364 ********* 2025-07-12 20:25:08.438671 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.438677 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.438689 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.438695 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.438701 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.438707 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.438713 | orchestrator | 2025-07-12 20:25:08.438719 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-07-12 20:25:08.438724 | orchestrator | Saturday 12 July 2025 20:17:10 +0000 (0:00:00.949) 0:03:54.314 ********* 2025-07-12 20:25:08.438730 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-07-12 20:25:08.438736 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-07-12 20:25:08.438741 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-07-12 20:25:08.438747 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.438752 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.438759 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.438766 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-07-12 20:25:08.438772 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-07-12 20:25:08.438778 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-07-12 20:25:08.438784 | orchestrator | 2025-07-12 20:25:08.438790 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2025-07-12 20:25:08.438795 | orchestrator | Saturday 12 July 2025 20:17:13 +0000 (0:00:02.479) 0:03:56.794 ********* 2025-07-12 20:25:08.438801 | orchestrator | changed: 
[testbed-node-0] 2025-07-12 20:25:08.438806 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:25:08.438813 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:25:08.438819 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:25:08.438825 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:25:08.438831 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:25:08.438837 | orchestrator | 2025-07-12 20:25:08.438844 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-12 20:25:08.438850 | orchestrator | Saturday 12 July 2025 20:17:16 +0000 (0:00:03.227) 0:04:00.021 ********* 2025-07-12 20:25:08.438857 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:25:08.438864 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:25:08.438870 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:25:08.438876 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:25:08.438882 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:25:08.438887 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:25:08.438893 | orchestrator | 2025-07-12 20:25:08.438900 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-07-12 20:25:08.438906 | orchestrator | Saturday 12 July 2025 20:17:17 +0000 (0:00:01.292) 0:04:01.314 ********* 2025-07-12 20:25:08.438912 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.438918 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.438925 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.438931 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:25:08.438937 | orchestrator | 2025-07-12 20:25:08.438943 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-07-12 20:25:08.438949 | orchestrator | Saturday 12 July 2025 20:17:18 +0000 (0:00:00.927) 
0:04:02.241 ********* 2025-07-12 20:25:08.438955 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.438962 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.438968 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.438974 | orchestrator | 2025-07-12 20:25:08.438980 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-07-12 20:25:08.438987 | orchestrator | Saturday 12 July 2025 20:17:19 +0000 (0:00:00.339) 0:04:02.581 ********* 2025-07-12 20:25:08.438993 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:25:08.438999 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:25:08.439005 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:25:08.439011 | orchestrator | 2025-07-12 20:25:08.439017 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-07-12 20:25:08.439035 | orchestrator | Saturday 12 July 2025 20:17:20 +0000 (0:00:01.554) 0:04:04.136 ********* 2025-07-12 20:25:08.439041 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-12 20:25:08.439048 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-12 20:25:08.439054 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-12 20:25:08.439084 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.439120 | orchestrator | 2025-07-12 20:25:08.439128 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-07-12 20:25:08.439134 | orchestrator | Saturday 12 July 2025 20:17:21 +0000 (0:00:00.642) 0:04:04.778 ********* 2025-07-12 20:25:08.439140 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.439147 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.439153 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.439159 | orchestrator | 2025-07-12 20:25:08.439165 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] 
********************************** 2025-07-12 20:25:08.439172 | orchestrator | Saturday 12 July 2025 20:17:21 +0000 (0:00:00.346) 0:04:05.125 ********* 2025-07-12 20:25:08.439178 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.439184 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.439190 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.439196 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:25:08.439202 | orchestrator | 2025-07-12 20:25:08.439208 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2025-07-12 20:25:08.439214 | orchestrator | Saturday 12 July 2025 20:17:22 +0000 (0:00:01.165) 0:04:06.290 ********* 2025-07-12 20:25:08.439220 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 20:25:08.439226 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 20:25:08.439232 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 20:25:08.439238 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.439244 | orchestrator | 2025-07-12 20:25:08.439250 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2025-07-12 20:25:08.439256 | orchestrator | Saturday 12 July 2025 20:17:23 +0000 (0:00:00.354) 0:04:06.645 ********* 2025-07-12 20:25:08.439262 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.439268 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.439274 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.439281 | orchestrator | 2025-07-12 20:25:08.439287 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2025-07-12 20:25:08.439293 | orchestrator | Saturday 12 July 2025 20:17:23 +0000 (0:00:00.286) 0:04:06.931 ********* 2025-07-12 20:25:08.439299 | orchestrator | 
skipping: [testbed-node-3] 2025-07-12 20:25:08.439305 | orchestrator | 2025-07-12 20:25:08.439311 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2025-07-12 20:25:08.439318 | orchestrator | Saturday 12 July 2025 20:17:23 +0000 (0:00:00.198) 0:04:07.130 ********* 2025-07-12 20:25:08.439324 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.439330 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.439336 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.439358 | orchestrator | 2025-07-12 20:25:08.439365 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2025-07-12 20:25:08.439372 | orchestrator | Saturday 12 July 2025 20:17:24 +0000 (0:00:00.288) 0:04:07.419 ********* 2025-07-12 20:25:08.439379 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.439385 | orchestrator | 2025-07-12 20:25:08.439392 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ******************** 2025-07-12 20:25:08.439398 | orchestrator | Saturday 12 July 2025 20:17:24 +0000 (0:00:00.212) 0:04:07.631 ********* 2025-07-12 20:25:08.439404 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.439410 | orchestrator | 2025-07-12 20:25:08.439416 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2025-07-12 20:25:08.439429 | orchestrator | Saturday 12 July 2025 20:17:24 +0000 (0:00:00.215) 0:04:07.846 ********* 2025-07-12 20:25:08.439435 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.439442 | orchestrator | 2025-07-12 20:25:08.439448 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2025-07-12 20:25:08.439454 | orchestrator | Saturday 12 July 2025 20:17:24 +0000 (0:00:00.280) 0:04:08.126 ********* 2025-07-12 20:25:08.439460 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.439466 | orchestrator | 
2025-07-12 20:25:08.439472 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2025-07-12 20:25:08.439478 | orchestrator | Saturday 12 July 2025 20:17:24 +0000 (0:00:00.198) 0:04:08.325 ********* 2025-07-12 20:25:08.439484 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.439490 | orchestrator | 2025-07-12 20:25:08.439496 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2025-07-12 20:25:08.439502 | orchestrator | Saturday 12 July 2025 20:17:25 +0000 (0:00:00.209) 0:04:08.535 ********* 2025-07-12 20:25:08.439508 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 20:25:08.439515 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 20:25:08.439521 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 20:25:08.439528 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.439534 | orchestrator | 2025-07-12 20:25:08.439540 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2025-07-12 20:25:08.439546 | orchestrator | Saturday 12 July 2025 20:17:25 +0000 (0:00:00.447) 0:04:08.982 ********* 2025-07-12 20:25:08.439552 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.439559 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.439565 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.439571 | orchestrator | 2025-07-12 20:25:08.439577 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2025-07-12 20:25:08.439583 | orchestrator | Saturday 12 July 2025 20:17:25 +0000 (0:00:00.420) 0:04:09.403 ********* 2025-07-12 20:25:08.439590 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.439596 | orchestrator | 2025-07-12 20:25:08.439602 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2025-07-12 
20:25:08.439608 | orchestrator | Saturday 12 July 2025 20:17:26 +0000 (0:00:00.258) 0:04:09.662 ********* 2025-07-12 20:25:08.439615 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.439621 | orchestrator | 2025-07-12 20:25:08.439627 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2025-07-12 20:25:08.439633 | orchestrator | Saturday 12 July 2025 20:17:26 +0000 (0:00:00.275) 0:04:09.937 ********* 2025-07-12 20:25:08.439644 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.439674 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.439682 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.439688 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:25:08.439694 | orchestrator | 2025-07-12 20:25:08.439699 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ******** 2025-07-12 20:25:08.439705 | orchestrator | Saturday 12 July 2025 20:17:27 +0000 (0:00:01.165) 0:04:11.103 ********* 2025-07-12 20:25:08.439711 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.439742 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.439748 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.439754 | orchestrator | 2025-07-12 20:25:08.439759 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2025-07-12 20:25:08.439765 | orchestrator | Saturday 12 July 2025 20:17:27 +0000 (0:00:00.283) 0:04:11.387 ********* 2025-07-12 20:25:08.439771 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:25:08.439776 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:25:08.439781 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:25:08.439787 | orchestrator | 2025-07-12 20:25:08.439792 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2025-07-12 
20:25:08.439805 | orchestrator | Saturday 12 July 2025 20:17:29 +0000 (0:00:01.279) 0:04:12.666 *********
2025-07-12 20:25:08.439811 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 20:25:08.439816 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 20:25:08.439822 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 20:25:08.439828 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.439833 | orchestrator |
2025-07-12 20:25:08.439839 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-07-12 20:25:08.439845 | orchestrator | Saturday 12 July 2025 20:17:30 +0000 (0:00:00.877) 0:04:13.543 *********
2025-07-12 20:25:08.439850 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.439856 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.439862 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.439869 | orchestrator |
2025-07-12 20:25:08.439874 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2025-07-12 20:25:08.439880 | orchestrator | Saturday 12 July 2025 20:17:30 +0000 (0:00:00.300) 0:04:13.843 *********
2025-07-12 20:25:08.439886 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.439892 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.439898 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.439905 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.439911 | orchestrator |
2025-07-12 20:25:08.439917 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2025-07-12 20:25:08.439923 | orchestrator | Saturday 12 July 2025 20:17:31 +0000 (0:00:00.902) 0:04:14.746 *********
2025-07-12 20:25:08.439929 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.439936 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.439942 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.439948 | orchestrator |
2025-07-12 20:25:08.439954 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2025-07-12 20:25:08.439960 | orchestrator | Saturday 12 July 2025 20:17:31 +0000 (0:00:00.339) 0:04:15.086 *********
2025-07-12 20:25:08.439966 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:25:08.439972 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:25:08.439978 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:25:08.439984 | orchestrator |
2025-07-12 20:25:08.439991 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2025-07-12 20:25:08.439997 | orchestrator | Saturday 12 July 2025 20:17:32 +0000 (0:00:01.234) 0:04:16.320 *********
2025-07-12 20:25:08.440003 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 20:25:08.440009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 20:25:08.440016 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 20:25:08.440022 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.440028 | orchestrator |
2025-07-12 20:25:08.440034 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2025-07-12 20:25:08.440040 | orchestrator | Saturday 12 July 2025 20:17:33 +0000 (0:00:00.772) 0:04:17.093 *********
2025-07-12 20:25:08.440046 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.440052 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.440058 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.440063 | orchestrator |
2025-07-12 20:25:08.440070 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2025-07-12 20:25:08.440076 | orchestrator | Saturday 12 July 2025 20:17:33 +0000 (0:00:00.303) 0:04:17.397 *********
2025-07-12 20:25:08.440082 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.440088 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.440093 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.440099 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.440106 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.440113 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.440118 | orchestrator |
2025-07-12 20:25:08.440131 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-07-12 20:25:08.440136 | orchestrator | Saturday 12 July 2025 20:17:34 +0000 (0:00:00.916) 0:04:18.313 *********
2025-07-12 20:25:08.440140 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.440144 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.440147 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.440151 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:25:08.440155 | orchestrator |
2025-07-12 20:25:08.440158 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-07-12 20:25:08.440162 | orchestrator | Saturday 12 July 2025 20:17:35 +0000 (0:00:00.913) 0:04:19.227 *********
2025-07-12 20:25:08.440166 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.440170 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.440173 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.440177 | orchestrator |
2025-07-12 20:25:08.440185 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-07-12 20:25:08.440215 | orchestrator | Saturday 12 July 2025 20:17:36 +0000 (0:00:00.300) 0:04:19.527 *********
2025-07-12 20:25:08.440219 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:25:08.440223 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:25:08.440227 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:25:08.440231 | orchestrator |
2025-07-12 20:25:08.440234 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-07-12 20:25:08.440238 | orchestrator | Saturday 12 July 2025 20:17:37 +0000 (0:00:01.106) 0:04:20.633 *********
2025-07-12 20:25:08.440242 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 20:25:08.440246 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 20:25:08.440249 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 20:25:08.440253 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.440257 | orchestrator |
2025-07-12 20:25:08.440260 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-07-12 20:25:08.440264 | orchestrator | Saturday 12 July 2025 20:17:37 +0000 (0:00:00.740) 0:04:21.374 *********
2025-07-12 20:25:08.440268 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.440271 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.440275 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.440279 | orchestrator |
2025-07-12 20:25:08.440283 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2025-07-12 20:25:08.440286 | orchestrator |
2025-07-12 20:25:08.440290 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-12 20:25:08.440294 | orchestrator | Saturday 12 July 2025 20:17:38 +0000 (0:00:00.695) 0:04:22.070 *********
2025-07-12 20:25:08.440298 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:25:08.440303 | orchestrator |
2025-07-12 20:25:08.440307 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-12 20:25:08.440310 | orchestrator | Saturday 12 July 2025 20:17:39 +0000 (0:00:00.497) 0:04:22.568 *********
2025-07-12 20:25:08.440314 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:25:08.440318 | orchestrator |
2025-07-12 20:25:08.440322 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-12 20:25:08.440326 | orchestrator | Saturday 12 July 2025 20:17:39 +0000 (0:00:00.644) 0:04:23.212 *********
2025-07-12 20:25:08.440329 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.440333 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.440337 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.440383 | orchestrator |
2025-07-12 20:25:08.440388 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-12 20:25:08.440392 | orchestrator | Saturday 12 July 2025 20:17:40 +0000 (0:00:00.747) 0:04:23.960 *********
2025-07-12 20:25:08.440400 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.440404 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.440408 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.440411 | orchestrator |
2025-07-12 20:25:08.440415 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-12 20:25:08.440421 | orchestrator | Saturday 12 July 2025 20:17:40 +0000 (0:00:00.309) 0:04:24.270 *********
2025-07-12 20:25:08.440427 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.440433 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.440439 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.440445 | orchestrator |
2025-07-12 20:25:08.440450 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-12 20:25:08.440456 | orchestrator | Saturday 12 July 2025 20:17:41 +0000 (0:00:00.300) 0:04:24.571 *********
2025-07-12 20:25:08.440461 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.440467 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.440472 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.440477 | orchestrator |
2025-07-12 20:25:08.440482 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-12 20:25:08.440488 | orchestrator | Saturday 12 July 2025 20:17:41 +0000 (0:00:00.461) 0:04:25.033 *********
2025-07-12 20:25:08.440495 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.440500 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.440506 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.440513 | orchestrator |
2025-07-12 20:25:08.440519 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-12 20:25:08.440525 | orchestrator | Saturday 12 July 2025 20:17:42 +0000 (0:00:00.704) 0:04:25.737 *********
2025-07-12 20:25:08.440532 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.440536 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.440540 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.440544 | orchestrator |
2025-07-12 20:25:08.440547 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-12 20:25:08.440551 | orchestrator | Saturday 12 July 2025 20:17:42 +0000 (0:00:00.258) 0:04:25.996 *********
2025-07-12 20:25:08.440555 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.440558 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.440562 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.440566 | orchestrator |
2025-07-12 20:25:08.440569 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-12 20:25:08.440573 | orchestrator | Saturday 12 July 2025 20:17:42 +0000 (0:00:00.265) 0:04:26.261 *********
2025-07-12 20:25:08.440577 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.440581 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.440584 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.440588 | orchestrator |
2025-07-12 20:25:08.440592 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-07-12 20:25:08.440595 | orchestrator | Saturday 12 July 2025 20:17:43 +0000 (0:00:00.963) 0:04:27.224 *********
2025-07-12 20:25:08.440599 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.440603 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.440606 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.440610 | orchestrator |
2025-07-12 20:25:08.440631 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-07-12 20:25:08.440661 | orchestrator | Saturday 12 July 2025 20:17:44 +0000 (0:00:00.670) 0:04:27.895 *********
2025-07-12 20:25:08.440666 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.440670 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.440674 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.440677 | orchestrator |
2025-07-12 20:25:08.440681 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-07-12 20:25:08.440685 | orchestrator | Saturday 12 July 2025 20:17:44 +0000 (0:00:00.339) 0:04:28.235 *********
2025-07-12 20:25:08.440689 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.440699 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.440703 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.440707 | orchestrator |
2025-07-12 20:25:08.440711 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-07-12 20:25:08.440714 | orchestrator | Saturday 12 July 2025 20:17:45 +0000 (0:00:00.305) 0:04:28.540 *********
2025-07-12 20:25:08.440718 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.440722 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.440725 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.440729 | orchestrator |
2025-07-12 20:25:08.440733 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-07-12 20:25:08.440736 | orchestrator | Saturday 12 July 2025 20:17:45 +0000 (0:00:00.482) 0:04:29.022 *********
2025-07-12 20:25:08.440740 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.440744 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.440748 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.440751 | orchestrator |
2025-07-12 20:25:08.440755 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-07-12 20:25:08.440761 | orchestrator | Saturday 12 July 2025 20:17:45 +0000 (0:00:00.298) 0:04:29.321 *********
2025-07-12 20:25:08.440767 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.440774 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.440780 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.440787 | orchestrator |
2025-07-12 20:25:08.440793 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-07-12 20:25:08.440799 | orchestrator | Saturday 12 July 2025 20:17:46 +0000 (0:00:00.286) 0:04:29.608 *********
2025-07-12 20:25:08.440805 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.440811 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.440817 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.440823 | orchestrator |
2025-07-12 20:25:08.440829 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-07-12 20:25:08.440836 | orchestrator | Saturday 12 July 2025 20:17:46 +0000 (0:00:00.281) 0:04:29.889 *********
2025-07-12 20:25:08.440841 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.440847 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.440853 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.440860 | orchestrator |
2025-07-12 20:25:08.440866 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-07-12 20:25:08.440872 | orchestrator | Saturday 12 July 2025 20:17:46 +0000 (0:00:00.463) 0:04:30.353 *********
2025-07-12 20:25:08.440879 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.440885 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.440891 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.440898 | orchestrator |
2025-07-12 20:25:08.440904 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-07-12 20:25:08.440911 | orchestrator | Saturday 12 July 2025 20:17:47 +0000 (0:00:00.299) 0:04:30.653 *********
2025-07-12 20:25:08.440918 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.440923 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.440927 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.440931 | orchestrator |
2025-07-12 20:25:08.440934 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-07-12 20:25:08.440938 | orchestrator | Saturday 12 July 2025 20:17:47 +0000 (0:00:00.278) 0:04:30.932 *********
2025-07-12 20:25:08.440942 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.440945 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.440964 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.440968 | orchestrator |
2025-07-12 20:25:08.440972 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
2025-07-12 20:25:08.440976 | orchestrator | Saturday 12 July 2025 20:17:48 +0000 (0:00:00.674) 0:04:31.606 *********
2025-07-12 20:25:08.440980 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.440984 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.440987 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.440996 | orchestrator |
2025-07-12 20:25:08.441000 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] **********************************
2025-07-12 20:25:08.441004 | orchestrator | Saturday 12 July 2025 20:17:48 +0000 (0:00:00.330) 0:04:31.937 *********
2025-07-12 20:25:08.441007 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:25:08.441011 | orchestrator |
2025-07-12 20:25:08.441015 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] **************
2025-07-12 20:25:08.441019 | orchestrator | Saturday 12 July 2025 20:17:49 +0000 (0:00:00.558) 0:04:32.495 *********
2025-07-12 20:25:08.441022 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.441026 | orchestrator |
2025-07-12 20:25:08.441030 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] *****************************
2025-07-12 20:25:08.441033 | orchestrator | Saturday 12 July 2025 20:17:49 +0000 (0:00:00.123) 0:04:32.619 *********
2025-07-12 20:25:08.441037 | orchestrator | changed: [testbed-node-0 -> localhost]
2025-07-12 20:25:08.441041 | orchestrator |
2025-07-12 20:25:08.441045 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
2025-07-12 20:25:08.441048 | orchestrator | Saturday 12 July 2025 20:17:50 +0000 (0:00:01.226) 0:04:33.845 *********
2025-07-12 20:25:08.441052 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.441056 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.441059 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.441063 | orchestrator |
2025-07-12 20:25:08.441067 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] *******************
2025-07-12 20:25:08.441070 | orchestrator | Saturday 12 July 2025 20:17:50 +0000 (0:00:00.302) 0:04:34.148 *********
2025-07-12 20:25:08.441074 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.441078 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.441081 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.441085 | orchestrator |
2025-07-12 20:25:08.441093 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] *******************************
2025-07-12 20:25:08.441120 | orchestrator | Saturday 12 July 2025 20:17:51 +0000 (0:00:00.306) 0:04:34.455 *********
2025-07-12 20:25:08.441125 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:25:08.441129 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:25:08.441132 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:25:08.441136 | orchestrator |
2025-07-12 20:25:08.441140 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
2025-07-12 20:25:08.441143 | orchestrator | Saturday 12 July 2025 20:17:52 +0000 (0:00:01.211) 0:04:35.666 *********
2025-07-12 20:25:08.441147 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:25:08.441151 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:25:08.441154 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:25:08.441158 | orchestrator |
2025-07-12 20:25:08.441162 | orchestrator | TASK [ceph-mon : Create monitor directory] *************************************
2025-07-12 20:25:08.441166 | orchestrator | Saturday 12 July 2025 20:17:53 +0000 (0:00:01.039) 0:04:36.706 *********
2025-07-12 20:25:08.441169 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:25:08.441173 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:25:08.441176 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:25:08.441180 | orchestrator |
2025-07-12 20:25:08.441184 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
2025-07-12 20:25:08.441187 | orchestrator | Saturday 12 July 2025 20:17:53 +0000 (0:00:00.653) 0:04:37.359 *********
2025-07-12 20:25:08.441191 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.441253 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.441259 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.441262 | orchestrator |
2025-07-12 20:25:08.441266 | orchestrator | TASK [ceph-mon : Create admin keyring] *****************************************
2025-07-12 20:25:08.441270 | orchestrator | Saturday 12 July 2025 20:17:54 +0000 (0:00:00.741) 0:04:38.101 *********
2025-07-12 20:25:08.441273 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:25:08.441277 | orchestrator |
2025-07-12 20:25:08.441281 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ******************************************
2025-07-12 20:25:08.441289 | orchestrator | Saturday 12 July 2025 20:17:55 +0000 (0:00:01.304) 0:04:39.405 *********
2025-07-12 20:25:08.441293 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.441296 | orchestrator |
2025-07-12 20:25:08.441300 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ******************************
2025-07-12 20:25:08.441304 | orchestrator | Saturday 12 July 2025 20:17:56 +0000 (0:00:00.720) 0:04:40.126 *********
2025-07-12 20:25:08.441307 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 20:25:08.441311 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 20:25:08.441315 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 20:25:08.441319 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-12 20:25:08.441323 | orchestrator | ok: [testbed-node-1] => (item=None)
2025-07-12 20:25:08.441326 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-07-12 20:25:08.441330 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-12 20:25:08.441334 | orchestrator | changed: [testbed-node-0 -> {{ item }}]
2025-07-12 20:25:08.441337 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-07-12 20:25:08.441379 | orchestrator | ok: [testbed-node-1 -> {{ item }}]
2025-07-12 20:25:08.441384 | orchestrator | ok: [testbed-node-2] => (item=None)
2025-07-12 20:25:08.441387 | orchestrator | ok: [testbed-node-2 -> {{ item }}]
2025-07-12 20:25:08.441391 | orchestrator |
2025-07-12 20:25:08.441395 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************
2025-07-12 20:25:08.441399 | orchestrator | Saturday 12 July 2025 20:17:59 +0000 (0:00:03.201) 0:04:43.328 *********
2025-07-12 20:25:08.441402 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:25:08.441406 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:25:08.441410 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:25:08.441413 | orchestrator |
2025-07-12 20:25:08.441417 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] **************************
2025-07-12 20:25:08.441421 | orchestrator | Saturday 12 July 2025 20:18:01 +0000 (0:00:01.361) 0:04:44.689 *********
2025-07-12 20:25:08.441425 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.441429 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.441432 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.441436 | orchestrator |
2025-07-12 20:25:08.441440 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************
2025-07-12 20:25:08.441443 | orchestrator | Saturday 12 July 2025 20:18:01 +0000 (0:00:00.295) 0:04:44.985 *********
2025-07-12 20:25:08.441447 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.441451 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.441455 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.441458 | orchestrator |
2025-07-12 20:25:08.441462 | orchestrator | TASK [ceph-mon : Generate initial monmap] **************************************
2025-07-12 20:25:08.441466 | orchestrator | Saturday 12 July 2025 20:18:01 +0000 (0:00:00.294) 0:04:45.279 *********
2025-07-12 20:25:08.441470 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:25:08.441473 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:25:08.441477 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:25:08.441481 | orchestrator |
2025-07-12 20:25:08.441484 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
2025-07-12 20:25:08.441488 | orchestrator | Saturday 12 July 2025 20:18:03 +0000 (0:00:01.746) 0:04:47.026 *********
2025-07-12 20:25:08.441492 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:25:08.441497 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:25:08.441501 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:25:08.441505 | orchestrator |
2025-07-12 20:25:08.441510 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
2025-07-12 20:25:08.441514 | orchestrator | Saturday 12 July 2025 20:18:05 +0000 (0:00:01.628) 0:04:48.654 *********
2025-07-12 20:25:08.441522 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.441526 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.441531 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.441535 | orchestrator |
2025-07-12 20:25:08.441540 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************
2025-07-12 20:25:08.441566 | orchestrator | Saturday 12 July 2025 20:18:05 +0000 (0:00:00.324) 0:04:48.979 *********
2025-07-12 20:25:08.441571 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:25:08.441575 | orchestrator |
2025-07-12 20:25:08.441580 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] *************
2025-07-12 20:25:08.441584 | orchestrator | Saturday 12 July 2025 20:18:06 +0000 (0:00:00.569) 0:04:49.549 *********
2025-07-12 20:25:08.441588 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.441592 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.441597 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.441601 | orchestrator |
2025-07-12 20:25:08.441605 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
2025-07-12 20:25:08.441609 | orchestrator | Saturday 12 July 2025 20:18:06 +0000 (0:00:00.583) 0:04:50.132 *********
2025-07-12 20:25:08.441614 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.441618 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.441622 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.441626 | orchestrator |
2025-07-12 20:25:08.441630 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************
2025-07-12 20:25:08.441635 | orchestrator | Saturday 12 July 2025 20:18:07 +0000 (0:00:00.323) 0:04:50.456 *********
2025-07-12 20:25:08.441639 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:25:08.441644 | orchestrator |
2025-07-12 20:25:08.441648 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] *****************
2025-07-12 20:25:08.441652 | orchestrator | Saturday 12 July 2025 20:18:07 +0000 (0:00:00.548) 0:04:51.004 *********
2025-07-12 20:25:08.441656 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:25:08.441660 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:25:08.441665 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:25:08.441669 | orchestrator |
2025-07-12 20:25:08.441673 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
2025-07-12 20:25:08.441678 | orchestrator | Saturday 12 July 2025 20:18:09 +0000 (0:00:01.754) 0:04:52.759 *********
2025-07-12 20:25:08.441682 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:25:08.441686 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:25:08.441690 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:25:08.441694 | orchestrator |
2025-07-12 20:25:08.441699 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] ***************************************
2025-07-12 20:25:08.441703 | orchestrator | Saturday 12 July 2025 20:18:10 +0000 (0:00:01.170) 0:04:53.930 *********
2025-07-12 20:25:08.441708 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:25:08.441712 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:25:08.441716 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:25:08.441720 | orchestrator |
2025-07-12 20:25:08.441724 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************
2025-07-12 20:25:08.441729 | orchestrator | Saturday 12 July 2025 20:18:12 +0000 (0:00:01.647) 0:04:55.577 *********
2025-07-12 20:25:08.441733 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:25:08.441737 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:25:08.441741 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:25:08.441745 | orchestrator |
2025-07-12 20:25:08.441750 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
2025-07-12 20:25:08.441754 | orchestrator | Saturday 12 July 2025 20:18:14 +0000 (0:00:02.221) 0:04:57.798 *********
2025-07-12 20:25:08.441758 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:25:08.441763 | orchestrator |
2025-07-12 20:25:08.441774 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
2025-07-12 20:25:08.441778 | orchestrator | Saturday 12 July 2025 20:18:15 +0000 (0:00:00.973) 0:04:58.772 *********
2025-07-12 20:25:08.441782 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left).
2025-07-12 20:25:08.441787 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.441791 | orchestrator |
2025-07-12 20:25:08.441795 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] **************************************
2025-07-12 20:25:08.441800 | orchestrator | Saturday 12 July 2025 20:18:37 +0000 (0:00:21.855) 0:05:20.628 *********
2025-07-12 20:25:08.441804 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.441808 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.441812 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.441816 | orchestrator |
2025-07-12 20:25:08.441821 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] ***********************************
2025-07-12 20:25:08.441825 | orchestrator | Saturday 12 July 2025 20:18:47 +0000 (0:00:10.715) 0:05:31.343 *********
2025-07-12 20:25:08.441829 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.441834 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.441838 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.441842 | orchestrator |
2025-07-12 20:25:08.441846 | orchestrator | TASK [ceph-mon : Set cluster configs] ******************************************
2025-07-12 20:25:08.441851 | orchestrator | Saturday 12 July 2025 20:18:48 +0000 (0:00:00.609) 0:05:31.953 *********
2025-07-12 20:25:08.441857 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__bfc2b7141cc3d9f7299c95cde8416caa1f67a079'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}])
2025-07-12 20:25:08.441879 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__bfc2b7141cc3d9f7299c95cde8416caa1f67a079'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}])
2025-07-12 20:25:08.441885 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__bfc2b7141cc3d9f7299c95cde8416caa1f67a079'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}])
2025-07-12 20:25:08.441891 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__bfc2b7141cc3d9f7299c95cde8416caa1f67a079'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}])
2025-07-12 20:25:08.441896 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__bfc2b7141cc3d9f7299c95cde8416caa1f67a079'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}])
2025-07-12 20:25:08.441901 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__bfc2b7141cc3d9f7299c95cde8416caa1f67a079'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__bfc2b7141cc3d9f7299c95cde8416caa1f67a079'}])
2025-07-12 20:25:08.441909 | orchestrator |
2025-07-12 20:25:08.441913 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-07-12 20:25:08.441917 | orchestrator | Saturday 12 July 2025 20:19:02 +0000 (0:00:14.423) 0:05:46.377 *********
2025-07-12 20:25:08.441920 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.441924 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.441928 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.441931 | orchestrator |
2025-07-12 20:25:08.441935 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2025-07-12 20:25:08.441939 | orchestrator | Saturday 12 July 2025 20:19:03 +0000 (0:00:00.294) 0:05:46.671 *********
2025-07-12 20:25:08.441942 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:25:08.441946 | orchestrator |
2025-07-12 20:25:08.441950 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2025-07-12 20:25:08.441953 | orchestrator | Saturday 12 July 2025 20:19:03 +0000 (0:00:00.727) 0:05:47.399 *********
2025-07-12 20:25:08.441957 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.441961 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.441964 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.441968 | orchestrator |
2025-07-12 20:25:08.441972 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2025-07-12 20:25:08.441975 | orchestrator | Saturday 12 July 2025 20:19:04 +0000 (0:00:00.377) 0:05:47.776 *********
2025-07-12 20:25:08.441979 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.441983 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.441986 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.441990 | orchestrator |
2025-07-12 20:25:08.441994 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2025-07-12 20:25:08.441998 | orchestrator | Saturday 12 July 2025 20:19:04 +0000 (0:00:00.350) 0:05:48.126 *********
2025-07-12 20:25:08.442001 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 20:25:08.442005 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 20:25:08.442009 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 20:25:08.442148 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.442165 | orchestrator |
2025-07-12 20:25:08.442171 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2025-07-12 20:25:08.442177 | orchestrator | Saturday 12 July 2025 20:19:05 +0000 (0:00:01.013) 0:05:49.140 *********
2025-07-12 20:25:08.442183 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.442189 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.442195 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.442201 | orchestrator |
2025-07-12 20:25:08.442207 | orchestrator | PLAY [Apply role ceph-mgr] *****************************************************
2025-07-12 20:25:08.442213 | orchestrator |
2025-07-12 20:25:08.442219 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-12 20:25:08.442226 | orchestrator | Saturday 12 July 2025 20:19:06 +0000 (0:00:00.905) 0:05:50.046 *********
2025-07-12 20:25:08.442232 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:25:08.442240 | orchestrator |
2025-07-12 20:25:08.442246 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-12 20:25:08.442250 | orchestrator | Saturday 12 July 2025 20:19:07 +0000 (0:00:00.507) 0:05:50.554 *********
2025-07-12 20:25:08.442254 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:25:08.442258 | orchestrator |
2025-07-12 20:25:08.442285 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-12 20:25:08.442290 | orchestrator | Saturday 12 July 2025 20:19:07 +0000 (0:00:00.821) 0:05:51.375 *********
2025-07-12 20:25:08.442295 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.442299 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.442307 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.442311 | orchestrator |
2025-07-12 20:25:08.442315 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-12 20:25:08.442318 | orchestrator | Saturday 12 July 2025 20:19:08 +0000 (0:00:00.735) 0:05:52.111 *********
2025-07-12 20:25:08.442322 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.442328 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.442334 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.442355 | orchestrator |
2025-07-12 20:25:08.442361 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-12 20:25:08.442368 | orchestrator | Saturday 12 July 2025 20:19:09 +0000 (0:00:00.347) 0:05:52.459 *********
2025-07-12 20:25:08.442374 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.442380 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.442386 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.442393 | orchestrator |
2025-07-12 20:25:08.442399 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-12
20:25:08.442405 | orchestrator | Saturday 12 July 2025 20:19:09 +0000 (0:00:00.633) 0:05:53.092 ********* 2025-07-12 20:25:08.442410 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.442416 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.442422 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.442428 | orchestrator | 2025-07-12 20:25:08.442434 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-12 20:25:08.442441 | orchestrator | Saturday 12 July 2025 20:19:10 +0000 (0:00:00.416) 0:05:53.509 ********* 2025-07-12 20:25:08.442447 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.442453 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.442459 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.442465 | orchestrator | 2025-07-12 20:25:08.442471 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-12 20:25:08.442477 | orchestrator | Saturday 12 July 2025 20:19:10 +0000 (0:00:00.856) 0:05:54.365 ********* 2025-07-12 20:25:08.442484 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.442490 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.442496 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.442503 | orchestrator | 2025-07-12 20:25:08.442509 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-12 20:25:08.442515 | orchestrator | Saturday 12 July 2025 20:19:11 +0000 (0:00:00.371) 0:05:54.737 ********* 2025-07-12 20:25:08.442521 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.442527 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.442532 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.442538 | orchestrator | 2025-07-12 20:25:08.442545 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-12 20:25:08.442551 | 
orchestrator | Saturday 12 July 2025 20:19:12 +0000 (0:00:00.678) 0:05:55.415 ********* 2025-07-12 20:25:08.442557 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.442563 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.442569 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.442574 | orchestrator | 2025-07-12 20:25:08.442580 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-12 20:25:08.442586 | orchestrator | Saturday 12 July 2025 20:19:12 +0000 (0:00:00.824) 0:05:56.240 ********* 2025-07-12 20:25:08.442592 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.442599 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.442605 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.442611 | orchestrator | 2025-07-12 20:25:08.442617 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-12 20:25:08.442623 | orchestrator | Saturday 12 July 2025 20:19:13 +0000 (0:00:00.803) 0:05:57.044 ********* 2025-07-12 20:25:08.442629 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.442635 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.442641 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.442647 | orchestrator | 2025-07-12 20:25:08.442659 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-12 20:25:08.442666 | orchestrator | Saturday 12 July 2025 20:19:13 +0000 (0:00:00.302) 0:05:57.346 ********* 2025-07-12 20:25:08.442672 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.442678 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.442684 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.442690 | orchestrator | 2025-07-12 20:25:08.442697 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-12 20:25:08.442703 | orchestrator | Saturday 12 July 2025 20:19:14 +0000 
(0:00:00.602) 0:05:57.949 ********* 2025-07-12 20:25:08.442710 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.442716 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.442722 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.442729 | orchestrator | 2025-07-12 20:25:08.442735 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-12 20:25:08.442741 | orchestrator | Saturday 12 July 2025 20:19:14 +0000 (0:00:00.340) 0:05:58.289 ********* 2025-07-12 20:25:08.442747 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.442754 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.442761 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.442767 | orchestrator | 2025-07-12 20:25:08.442773 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-12 20:25:08.442780 | orchestrator | Saturday 12 July 2025 20:19:15 +0000 (0:00:00.328) 0:05:58.618 ********* 2025-07-12 20:25:08.442786 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.442792 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.442798 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.442804 | orchestrator | 2025-07-12 20:25:08.442810 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-12 20:25:08.442816 | orchestrator | Saturday 12 July 2025 20:19:15 +0000 (0:00:00.398) 0:05:59.017 ********* 2025-07-12 20:25:08.442822 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.442828 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.442834 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.442841 | orchestrator | 2025-07-12 20:25:08.442851 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-12 20:25:08.442939 | orchestrator | Saturday 12 July 2025 20:19:16 +0000 
(0:00:00.631) 0:05:59.648 ********* 2025-07-12 20:25:08.442948 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.442953 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.442959 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.442966 | orchestrator | 2025-07-12 20:25:08.442973 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-12 20:25:08.442979 | orchestrator | Saturday 12 July 2025 20:19:16 +0000 (0:00:00.325) 0:05:59.974 ********* 2025-07-12 20:25:08.442985 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.442991 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.442996 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.443002 | orchestrator | 2025-07-12 20:25:08.443008 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-12 20:25:08.443014 | orchestrator | Saturday 12 July 2025 20:19:16 +0000 (0:00:00.337) 0:06:00.311 ********* 2025-07-12 20:25:08.443020 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.443026 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.443033 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.443040 | orchestrator | 2025-07-12 20:25:08.443046 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-12 20:25:08.443053 | orchestrator | Saturday 12 July 2025 20:19:17 +0000 (0:00:00.347) 0:06:00.658 ********* 2025-07-12 20:25:08.443059 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.443065 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.443071 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.443077 | orchestrator | 2025-07-12 20:25:08.443084 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-07-12 20:25:08.443096 | orchestrator | Saturday 12 July 2025 20:19:18 +0000 (0:00:00.806) 0:06:01.465 ********* 2025-07-12 
20:25:08.443102 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-12 20:25:08.443109 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-07-12 20:25:08.443115 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-12 20:25:08.443121 | orchestrator | 2025-07-12 20:25:08.443127 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-07-12 20:25:08.443133 | orchestrator | Saturday 12 July 2025 20:19:18 +0000 (0:00:00.643) 0:06:02.108 ********* 2025-07-12 20:25:08.443139 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:25:08.443146 | orchestrator | 2025-07-12 20:25:08.443153 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-07-12 20:25:08.443159 | orchestrator | Saturday 12 July 2025 20:19:19 +0000 (0:00:00.557) 0:06:02.665 ********* 2025-07-12 20:25:08.443203 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:25:08.443210 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:25:08.443216 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:25:08.443222 | orchestrator | 2025-07-12 20:25:08.443228 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-07-12 20:25:08.443235 | orchestrator | Saturday 12 July 2025 20:19:20 +0000 (0:00:00.921) 0:06:03.587 ********* 2025-07-12 20:25:08.443241 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.443248 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.443254 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.443260 | orchestrator | 2025-07-12 20:25:08.443266 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-07-12 20:25:08.443272 | orchestrator | Saturday 12 July 2025 20:19:20 
+0000 (0:00:00.386) 0:06:03.973 ********* 2025-07-12 20:25:08.443279 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-12 20:25:08.443286 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-12 20:25:08.443292 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-12 20:25:08.443298 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-07-12 20:25:08.443304 | orchestrator | 2025-07-12 20:25:08.443310 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-07-12 20:25:08.443316 | orchestrator | Saturday 12 July 2025 20:19:30 +0000 (0:00:10.324) 0:06:14.298 ********* 2025-07-12 20:25:08.443322 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.443328 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.443335 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.443550 | orchestrator | 2025-07-12 20:25:08.443612 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-07-12 20:25:08.443618 | orchestrator | Saturday 12 July 2025 20:19:31 +0000 (0:00:00.437) 0:06:14.735 ********* 2025-07-12 20:25:08.443625 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-07-12 20:25:08.443631 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-07-12 20:25:08.443637 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-07-12 20:25:08.443644 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-07-12 20:25:08.443650 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:25:08.443656 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:25:08.443662 | orchestrator | 2025-07-12 20:25:08.443668 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-07-12 20:25:08.443675 | orchestrator | Saturday 12 July 2025 20:19:34 +0000 (0:00:03.074) 
0:06:17.810 ********* 2025-07-12 20:25:08.443681 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-07-12 20:25:08.443687 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-07-12 20:25:08.443693 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-07-12 20:25:08.443700 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-07-12 20:25:08.443714 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-07-12 20:25:08.443720 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-07-12 20:25:08.443726 | orchestrator | 2025-07-12 20:25:08.443732 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-07-12 20:25:08.443739 | orchestrator | Saturday 12 July 2025 20:19:35 +0000 (0:00:01.313) 0:06:19.124 ********* 2025-07-12 20:25:08.443745 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.443757 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.443908 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.443919 | orchestrator | 2025-07-12 20:25:08.443926 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-07-12 20:25:08.443932 | orchestrator | Saturday 12 July 2025 20:19:36 +0000 (0:00:00.714) 0:06:19.839 ********* 2025-07-12 20:25:08.443938 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.443944 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.443950 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.443957 | orchestrator | 2025-07-12 20:25:08.443963 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-07-12 20:25:08.443969 | orchestrator | Saturday 12 July 2025 20:19:36 +0000 (0:00:00.415) 0:06:20.254 ********* 2025-07-12 20:25:08.443976 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.443982 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.443989 | orchestrator | skipping: 
[testbed-node-2] 2025-07-12 20:25:08.443995 | orchestrator | 2025-07-12 20:25:08.444001 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-07-12 20:25:08.444007 | orchestrator | Saturday 12 July 2025 20:19:37 +0000 (0:00:00.642) 0:06:20.896 ********* 2025-07-12 20:25:08.444014 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:25:08.444020 | orchestrator | 2025-07-12 20:25:08.444027 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-07-12 20:25:08.444033 | orchestrator | Saturday 12 July 2025 20:19:38 +0000 (0:00:00.720) 0:06:21.617 ********* 2025-07-12 20:25:08.444039 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.444045 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.444051 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.444057 | orchestrator | 2025-07-12 20:25:08.444063 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-07-12 20:25:08.444070 | orchestrator | Saturday 12 July 2025 20:19:38 +0000 (0:00:00.353) 0:06:21.970 ********* 2025-07-12 20:25:08.444076 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.444082 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.444088 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.444093 | orchestrator | 2025-07-12 20:25:08.444100 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2025-07-12 20:25:08.444106 | orchestrator | Saturday 12 July 2025 20:19:38 +0000 (0:00:00.377) 0:06:22.347 ********* 2025-07-12 20:25:08.444112 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:25:08.444119 | orchestrator | 2025-07-12 20:25:08.444125 | orchestrator | TASK [ceph-mgr : Generate 
systemd unit file] *********************************** 2025-07-12 20:25:08.444131 | orchestrator | Saturday 12 July 2025 20:19:39 +0000 (0:00:00.843) 0:06:23.191 ********* 2025-07-12 20:25:08.444137 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:25:08.444143 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:25:08.444149 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:25:08.444156 | orchestrator | 2025-07-12 20:25:08.444161 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-07-12 20:25:08.444168 | orchestrator | Saturday 12 July 2025 20:19:41 +0000 (0:00:01.319) 0:06:24.511 ********* 2025-07-12 20:25:08.444174 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:25:08.444180 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:25:08.444187 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:25:08.444200 | orchestrator | 2025-07-12 20:25:08.444206 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-07-12 20:25:08.444212 | orchestrator | Saturday 12 July 2025 20:19:42 +0000 (0:00:01.198) 0:06:25.709 ********* 2025-07-12 20:25:08.444218 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:25:08.444225 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:25:08.444231 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:25:08.444237 | orchestrator | 2025-07-12 20:25:08.444243 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-07-12 20:25:08.444249 | orchestrator | Saturday 12 July 2025 20:19:44 +0000 (0:00:01.972) 0:06:27.681 ********* 2025-07-12 20:25:08.444256 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:25:08.444262 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:25:08.444268 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:25:08.444274 | orchestrator | 2025-07-12 20:25:08.444280 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] 
************************************** 2025-07-12 20:25:08.444286 | orchestrator | Saturday 12 July 2025 20:19:46 +0000 (0:00:01.875) 0:06:29.557 ********* 2025-07-12 20:25:08.444293 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.444299 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.444305 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-07-12 20:25:08.444311 | orchestrator | 2025-07-12 20:25:08.444317 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-07-12 20:25:08.444323 | orchestrator | Saturday 12 July 2025 20:19:46 +0000 (0:00:00.404) 0:06:29.962 ********* 2025-07-12 20:25:08.444329 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-07-12 20:25:08.444336 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-07-12 20:25:08.444356 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-07-12 20:25:08.444363 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-07-12 20:25:08.444369 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2025-07-12 20:25:08.444375 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-07-12 20:25:08.444382 | orchestrator | 2025-07-12 20:25:08.444388 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2025-07-12 20:25:08.444395 | orchestrator | Saturday 12 July 2025 20:20:17 +0000 (0:00:30.554) 0:07:00.516 ********* 2025-07-12 20:25:08.444435 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2025-07-12 20:25:08.444440 | orchestrator | 2025-07-12 20:25:08.444444 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2025-07-12 20:25:08.444448 | orchestrator | Saturday 12 July 2025 20:20:18 +0000 (0:00:01.453) 0:07:01.969 ********* 2025-07-12 20:25:08.444451 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.444455 | orchestrator | 2025-07-12 20:25:08.444459 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2025-07-12 20:25:08.444463 | orchestrator | Saturday 12 July 2025 20:20:19 +0000 (0:00:00.762) 0:07:02.732 ********* 2025-07-12 20:25:08.444466 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.444470 | orchestrator | 2025-07-12 20:25:08.444474 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2025-07-12 20:25:08.444477 | orchestrator | Saturday 12 July 2025 20:20:19 +0000 (0:00:00.122) 0:07:02.855 ********* 2025-07-12 20:25:08.444481 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2025-07-12 20:25:08.444485 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2025-07-12 20:25:08.444488 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2025-07-12 20:25:08.444492 | orchestrator | 2025-07-12 20:25:08.444496 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2025-07-12 20:25:08.444504 | orchestrator | Saturday 12 July 2025 20:20:25 +0000 (0:00:06.306) 0:07:09.161 ********* 2025-07-12 20:25:08.444508 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2025-07-12 20:25:08.444513 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2025-07-12 20:25:08.444519 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2025-07-12 20:25:08.444525 | orchestrator | skipping: [testbed-node-2] => (item=status)  2025-07-12 20:25:08.444531 | orchestrator | 2025-07-12 20:25:08.444537 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-07-12 20:25:08.444543 | orchestrator | Saturday 12 July 2025 20:20:30 +0000 (0:00:04.562) 0:07:13.724 ********* 2025-07-12 20:25:08.444549 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:25:08.444556 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:25:08.444562 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:25:08.444568 | orchestrator | 2025-07-12 20:25:08.444574 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2025-07-12 20:25:08.444579 | orchestrator | Saturday 12 July 2025 20:20:31 +0000 (0:00:01.000) 0:07:14.724 ********* 2025-07-12 20:25:08.444585 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:25:08.444591 | orchestrator | 2025-07-12 20:25:08.444597 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2025-07-12 20:25:08.444600 | orchestrator | Saturday 12 July 2025 20:20:31 +0000 (0:00:00.588) 0:07:15.313 ********* 2025-07-12 20:25:08.444604 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.444611 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.444617 | orchestrator | ok: 
[testbed-node-2] 2025-07-12 20:25:08.444623 | orchestrator | 2025-07-12 20:25:08.444628 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2025-07-12 20:25:08.444635 | orchestrator | Saturday 12 July 2025 20:20:32 +0000 (0:00:00.355) 0:07:15.668 ********* 2025-07-12 20:25:08.444641 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:25:08.444647 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:25:08.444653 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:25:08.444660 | orchestrator | 2025-07-12 20:25:08.444666 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2025-07-12 20:25:08.444672 | orchestrator | Saturday 12 July 2025 20:20:33 +0000 (0:00:01.495) 0:07:17.164 ********* 2025-07-12 20:25:08.444678 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-07-12 20:25:08.444683 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-07-12 20:25:08.444687 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-07-12 20:25:08.444691 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.444694 | orchestrator | 2025-07-12 20:25:08.444698 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2025-07-12 20:25:08.444702 | orchestrator | Saturday 12 July 2025 20:20:34 +0000 (0:00:00.649) 0:07:17.813 ********* 2025-07-12 20:25:08.444708 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.444715 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.444721 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.444727 | orchestrator | 2025-07-12 20:25:08.444733 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2025-07-12 20:25:08.444739 | orchestrator | 2025-07-12 20:25:08.444745 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-12 
20:25:08.444751 | orchestrator | Saturday 12 July 2025 20:20:34 +0000 (0:00:00.552) 0:07:18.365 ********* 2025-07-12 20:25:08.444757 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:25:08.444764 | orchestrator | 2025-07-12 20:25:08.444770 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-12 20:25:08.444777 | orchestrator | Saturday 12 July 2025 20:20:35 +0000 (0:00:00.731) 0:07:19.096 ********* 2025-07-12 20:25:08.444788 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:25:08.444795 | orchestrator | 2025-07-12 20:25:08.444800 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-12 20:25:08.444806 | orchestrator | Saturday 12 July 2025 20:20:36 +0000 (0:00:00.545) 0:07:19.641 ********* 2025-07-12 20:25:08.444812 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.444818 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.444824 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.444831 | orchestrator | 2025-07-12 20:25:08.444841 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-12 20:25:08.444872 | orchestrator | Saturday 12 July 2025 20:20:36 +0000 (0:00:00.309) 0:07:19.951 ********* 2025-07-12 20:25:08.444879 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.444885 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.444891 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.444897 | orchestrator | 2025-07-12 20:25:08.444904 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-12 20:25:08.444910 | orchestrator | Saturday 12 July 2025 20:20:37 +0000 (0:00:00.942) 0:07:20.894 ********* 
2025-07-12 20:25:08.444915 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.444921 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.444927 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.444933 | orchestrator |
2025-07-12 20:25:08.444939 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-12 20:25:08.444945 | orchestrator | Saturday 12 July 2025 20:20:38 +0000 (0:00:00.683) 0:07:21.578 *********
2025-07-12 20:25:08.444951 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.444957 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.444963 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.444969 | orchestrator |
2025-07-12 20:25:08.444976 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-12 20:25:08.444982 | orchestrator | Saturday 12 July 2025 20:20:38 +0000 (0:00:00.704) 0:07:22.282 *********
2025-07-12 20:25:08.444988 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.444994 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.445000 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.445006 | orchestrator |
2025-07-12 20:25:08.445012 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-12 20:25:08.445018 | orchestrator | Saturday 12 July 2025 20:20:39 +0000 (0:00:00.337) 0:07:22.620 *********
2025-07-12 20:25:08.445024 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.445030 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.445037 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.445043 | orchestrator |
2025-07-12 20:25:08.445049 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-12 20:25:08.445056 | orchestrator | Saturday 12 July 2025 20:20:39 +0000 (0:00:00.637) 0:07:23.258 *********
2025-07-12 20:25:08.445062 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.445068 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.445074 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.445080 | orchestrator |
2025-07-12 20:25:08.445086 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-12 20:25:08.445093 | orchestrator | Saturday 12 July 2025 20:20:40 +0000 (0:00:00.329) 0:07:23.587 *********
2025-07-12 20:25:08.445099 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.445105 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.445111 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.445117 | orchestrator |
2025-07-12 20:25:08.445123 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-07-12 20:25:08.445129 | orchestrator | Saturday 12 July 2025 20:20:40 +0000 (0:00:00.752) 0:07:24.340 *********
2025-07-12 20:25:08.445136 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.445142 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.445158 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.445164 | orchestrator |
2025-07-12 20:25:08.445170 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-07-12 20:25:08.445177 | orchestrator | Saturday 12 July 2025 20:20:41 +0000 (0:00:00.815) 0:07:25.156 *********
2025-07-12 20:25:08.445193 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.445199 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.445205 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.445218 | orchestrator |
2025-07-12 20:25:08.445225 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-07-12 20:25:08.445231 | orchestrator | Saturday 12 July 2025 20:20:42 +0000 (0:00:00.590) 0:07:25.746 *********
2025-07-12 20:25:08.445237 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.445243 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.445249 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.445255 | orchestrator |
2025-07-12 20:25:08.445262 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-07-12 20:25:08.445268 | orchestrator | Saturday 12 July 2025 20:20:42 +0000 (0:00:00.313) 0:07:26.060 *********
2025-07-12 20:25:08.445274 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.445280 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.445286 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.445292 | orchestrator |
2025-07-12 20:25:08.445298 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-07-12 20:25:08.445304 | orchestrator | Saturday 12 July 2025 20:20:42 +0000 (0:00:00.327) 0:07:26.388 *********
2025-07-12 20:25:08.445310 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.445316 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.445322 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.445328 | orchestrator |
2025-07-12 20:25:08.445335 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-07-12 20:25:08.445375 | orchestrator | Saturday 12 July 2025 20:20:43 +0000 (0:00:00.357) 0:07:26.745 *********
2025-07-12 20:25:08.445382 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.445388 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.445394 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.445401 | orchestrator |
2025-07-12 20:25:08.445408 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-07-12 20:25:08.445412 | orchestrator | Saturday 12 July 2025 20:20:43 +0000 (0:00:00.653) 0:07:27.399 *********
2025-07-12 20:25:08.445416 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.445419 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.445423 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.445427 | orchestrator |
2025-07-12 20:25:08.445430 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-07-12 20:25:08.445434 | orchestrator | Saturday 12 July 2025 20:20:44 +0000 (0:00:00.346) 0:07:27.746 *********
2025-07-12 20:25:08.445438 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.445442 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.445445 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.445449 | orchestrator |
2025-07-12 20:25:08.445453 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-07-12 20:25:08.445456 | orchestrator | Saturday 12 July 2025 20:20:44 +0000 (0:00:00.323) 0:07:28.069 *********
2025-07-12 20:25:08.445469 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.445473 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.445476 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.445480 | orchestrator |
2025-07-12 20:25:08.445484 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-07-12 20:25:08.445488 | orchestrator | Saturday 12 July 2025 20:20:44 +0000 (0:00:00.336) 0:07:28.405 *********
2025-07-12 20:25:08.445491 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.445495 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.445498 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.445507 | orchestrator |
2025-07-12 20:25:08.445511 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-07-12 20:25:08.445515 | orchestrator | Saturday 12 July 2025 20:20:45 +0000 (0:00:00.674) 0:07:29.080 *********
2025-07-12 20:25:08.445518 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.445522 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.445526 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.445529 | orchestrator |
2025-07-12 20:25:08.445533 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-07-12 20:25:08.445537 | orchestrator | Saturday 12 July 2025 20:20:46 +0000 (0:00:00.560) 0:07:29.641 *********
2025-07-12 20:25:08.445540 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.445544 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.445548 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.445551 | orchestrator |
2025-07-12 20:25:08.445556 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-07-12 20:25:08.445562 | orchestrator | Saturday 12 July 2025 20:20:46 +0000 (0:00:00.302) 0:07:29.943 *********
2025-07-12 20:25:08.445568 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-12 20:25:08.445574 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 20:25:08.445580 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 20:25:08.445586 | orchestrator |
2025-07-12 20:25:08.445592 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-07-12 20:25:08.445597 | orchestrator | Saturday 12 July 2025 20:20:47 +0000 (0:00:00.958) 0:07:30.901 *********
2025-07-12 20:25:08.445603 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.445609 | orchestrator |
2025-07-12 20:25:08.445616 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-07-12 20:25:08.445622 | orchestrator | Saturday 12 July 2025 20:20:48 +0000 (0:00:00.805) 0:07:31.707 *********
2025-07-12 20:25:08.445627 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.445633 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.445639 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.445646 | orchestrator |
2025-07-12 20:25:08.445652 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2025-07-12 20:25:08.445658 | orchestrator | Saturday 12 July 2025 20:20:48 +0000 (0:00:00.311) 0:07:32.018 *********
2025-07-12 20:25:08.445664 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.445670 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.445677 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.445683 | orchestrator |
2025-07-12 20:25:08.445690 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2025-07-12 20:25:08.445697 | orchestrator | Saturday 12 July 2025 20:20:48 +0000 (0:00:00.293) 0:07:32.312 *********
2025-07-12 20:25:08.445704 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.445710 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.445717 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.445724 | orchestrator |
2025-07-12 20:25:08.445731 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2025-07-12 20:25:08.445739 | orchestrator | Saturday 12 July 2025 20:20:49 +0000 (0:00:00.885) 0:07:33.197 *********
2025-07-12 20:25:08.445746 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.445753 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.445760 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.445766 | orchestrator |
2025-07-12 20:25:08.445773 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2025-07-12 20:25:08.445780 | orchestrator | Saturday 12 July 2025 20:20:50 +0000 (0:00:00.357) 0:07:33.554 *********
2025-07-12 20:25:08.445786 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-07-12 20:25:08.445792 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-07-12 20:25:08.445804 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-07-12 20:25:08.445811 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-07-12 20:25:08.445817 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-07-12 20:25:08.445823 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-07-12 20:25:08.445830 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-07-12 20:25:08.445834 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-07-12 20:25:08.445838 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-07-12 20:25:08.445841 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-07-12 20:25:08.445845 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-07-12 20:25:08.445849 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-07-12 20:25:08.445853 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-07-12 20:25:08.445873 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-07-12 20:25:08.445880 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-07-12 20:25:08.445886 | orchestrator |
2025-07-12 20:25:08.445892 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2025-07-12 20:25:08.445898 | orchestrator | Saturday 12 July 2025 20:20:52 +0000 (0:00:01.884) 0:07:35.439 *********
2025-07-12 20:25:08.445904 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.445911 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.445917 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.445923 | orchestrator |
2025-07-12 20:25:08.445931 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2025-07-12 20:25:08.445934 | orchestrator | Saturday 12 July 2025 20:20:52 +0000 (0:00:00.307) 0:07:35.746 *********
2025-07-12 20:25:08.445938 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.445942 | orchestrator |
2025-07-12 20:25:08.445946 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2025-07-12 20:25:08.445949 | orchestrator | Saturday 12 July 2025 20:20:53 +0000 (0:00:00.772) 0:07:36.519 *********
2025-07-12 20:25:08.445954 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-07-12 20:25:08.445960 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-07-12 20:25:08.445966 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-07-12 20:25:08.445972 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-07-12 20:25:08.445978 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-07-12 20:25:08.445984 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-07-12 20:25:08.445990 | orchestrator |
2025-07-12 20:25:08.445997 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-07-12 20:25:08.446003 | orchestrator | Saturday 12 July 2025 20:20:54 +0000 (0:00:00.925) 0:07:37.445 *********
2025-07-12 20:25:08.446009 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 20:25:08.446054 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-07-12 20:25:08.446061 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-07-12 20:25:08.446067 | orchestrator |
2025-07-12 20:25:08.446073 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-07-12 20:25:08.446079 | orchestrator | Saturday 12 July 2025 20:20:56 +0000 (0:00:02.035) 0:07:39.481 *********
2025-07-12 20:25:08.446091 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-12 20:25:08.446097 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-07-12 20:25:08.446104 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:25:08.446110 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-12 20:25:08.446116 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-07-12 20:25:08.446122 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:25:08.446128 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-12 20:25:08.446134 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-07-12 20:25:08.446140 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:25:08.446146 | orchestrator |
2025-07-12 20:25:08.446152 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-07-12 20:25:08.446158 | orchestrator | Saturday 12 July 2025 20:20:57 +0000 (0:00:01.329) 0:07:40.811 *********
2025-07-12 20:25:08.446162 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-07-12 20:25:08.446166 | orchestrator |
2025-07-12 20:25:08.446169 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-07-12 20:25:08.446173 | orchestrator | Saturday 12 July 2025 20:20:59 +0000 (0:00:01.917) 0:07:42.728 *********
2025-07-12 20:25:08.446177 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.446181 | orchestrator |
2025-07-12 20:25:08.446184 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-07-12 20:25:08.446188 | orchestrator | Saturday 12 July 2025 20:20:59 +0000 (0:00:00.534) 0:07:43.263 *********
2025-07-12 20:25:08.446192 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c2ea885c-c09d-528a-8e30-9d64ecae89b3', 'data_vg': 'ceph-c2ea885c-c09d-528a-8e30-9d64ecae89b3'})
2025-07-12 20:25:08.446197 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-3d06229f-4e10-52c4-b396-8cb508609dff', 'data_vg': 'ceph-3d06229f-4e10-52c4-b396-8cb508609dff'})
2025-07-12 20:25:08.446201 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-a733058e-5b74-5553-b3bf-66d1cbf46d31', 'data_vg': 'ceph-a733058e-5b74-5553-b3bf-66d1cbf46d31'})
2025-07-12 20:25:08.446205 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5037a2b3-768c-53ee-9f72-df4915d4fb6f', 'data_vg': 'ceph-5037a2b3-768c-53ee-9f72-df4915d4fb6f'})
2025-07-12 20:25:08.446209 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-81820e8a-af8a-5909-b466-981a4bed2414', 'data_vg': 'ceph-81820e8a-af8a-5909-b466-981a4bed2414'})
2025-07-12 20:25:08.446213 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-8d632655-ba67-5245-89a0-0cb971b00289', 'data_vg': 'ceph-8d632655-ba67-5245-89a0-0cb971b00289'})
2025-07-12 20:25:08.446216 | orchestrator |
2025-07-12 20:25:08.446220 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-07-12 20:25:08.446224 | orchestrator | Saturday 12 July 2025 20:21:43 +0000 (0:00:43.365) 0:08:26.629 *********
2025-07-12 20:25:08.446228 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.446239 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.446243 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.446247 | orchestrator |
2025-07-12 20:25:08.446251 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-07-12 20:25:08.446255 | orchestrator | Saturday 12 July 2025 20:21:43 +0000 (0:00:00.570) 0:08:27.200 *********
2025-07-12 20:25:08.446258 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.446262 | orchestrator |
2025-07-12 20:25:08.446266 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-07-12 20:25:08.446270 | orchestrator | Saturday 12 July 2025 20:21:44 +0000 (0:00:00.554) 0:08:27.754 *********
2025-07-12 20:25:08.446273 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.446277 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.446281 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.446289 | orchestrator |
2025-07-12 20:25:08.446293 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2025-07-12 20:25:08.446297 | orchestrator | Saturday 12 July 2025 20:21:45 +0000 (0:00:00.690) 0:08:28.444 *********
2025-07-12 20:25:08.446300 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.446304 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.446308 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.446311 | orchestrator |
2025-07-12 20:25:08.446315 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2025-07-12 20:25:08.446319 | orchestrator | Saturday 12 July 2025 20:21:47 +0000 (0:00:02.888) 0:08:31.333 *********
2025-07-12 20:25:08.446322 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.446326 | orchestrator |
2025-07-12 20:25:08.446330 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2025-07-12 20:25:08.446334 | orchestrator | Saturday 12 July 2025 20:21:48 +0000 (0:00:00.552) 0:08:31.885 *********
2025-07-12 20:25:08.446337 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:25:08.446357 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:25:08.446364 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:25:08.446370 | orchestrator |
2025-07-12 20:25:08.446376 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2025-07-12 20:25:08.446383 | orchestrator | Saturday 12 July 2025 20:21:49 +0000 (0:00:01.138) 0:08:33.024 *********
2025-07-12 20:25:08.446388 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:25:08.446392 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:25:08.446395 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:25:08.446399 | orchestrator |
2025-07-12 20:25:08.446403 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2025-07-12 20:25:08.446407 | orchestrator | Saturday 12 July 2025 20:21:51 +0000 (0:00:01.445) 0:08:34.469 *********
2025-07-12 20:25:08.446410 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:25:08.446414 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:25:08.446418 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:25:08.446421 | orchestrator |
2025-07-12 20:25:08.446425 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2025-07-12 20:25:08.446429 | orchestrator | Saturday 12 July 2025 20:21:52 +0000 (0:00:01.631) 0:08:36.101 *********
2025-07-12 20:25:08.446433 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.446436 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.446440 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.446444 | orchestrator |
2025-07-12 20:25:08.446447 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2025-07-12 20:25:08.446451 | orchestrator | Saturday 12 July 2025 20:21:53 +0000 (0:00:00.353) 0:08:36.455 *********
2025-07-12 20:25:08.446455 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.446459 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.446462 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.446466 | orchestrator |
2025-07-12 20:25:08.446470 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2025-07-12 20:25:08.446476 | orchestrator | Saturday 12 July 2025 20:21:53 +0000 (0:00:00.367) 0:08:36.822 *********
2025-07-12 20:25:08.446482 | orchestrator | ok: [testbed-node-3] => (item=5)
2025-07-12 20:25:08.446487 | orchestrator | ok: [testbed-node-4] => (item=3)
2025-07-12 20:25:08.446493 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-07-12 20:25:08.446499 | orchestrator | ok: [testbed-node-4] => (item=1)
2025-07-12 20:25:08.446505 | orchestrator | ok: [testbed-node-5] => (item=4)
2025-07-12 20:25:08.446510 | orchestrator | ok: [testbed-node-5] => (item=2)
2025-07-12 20:25:08.446516 | orchestrator |
2025-07-12 20:25:08.446521 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2025-07-12 20:25:08.446526 | orchestrator | Saturday 12 July 2025 20:21:54 +0000 (0:00:01.333) 0:08:38.156 *********
2025-07-12 20:25:08.446532 | orchestrator | changed: [testbed-node-3] => (item=5)
2025-07-12 20:25:08.446544 | orchestrator | changed: [testbed-node-4] => (item=3)
2025-07-12 20:25:08.446550 | orchestrator | changed: [testbed-node-5] => (item=4)
2025-07-12 20:25:08.446555 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-07-12 20:25:08.446561 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-07-12 20:25:08.446565 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-07-12 20:25:08.446569 | orchestrator |
2025-07-12 20:25:08.446572 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2025-07-12 20:25:08.446576 | orchestrator | Saturday 12 July 2025 20:21:56 +0000 (0:00:02.070) 0:08:40.226 *********
2025-07-12 20:25:08.446580 | orchestrator | changed: [testbed-node-4] => (item=3)
2025-07-12 20:25:08.446584 | orchestrator | changed: [testbed-node-5] => (item=4)
2025-07-12 20:25:08.446587 | orchestrator | changed: [testbed-node-3] => (item=5)
2025-07-12 20:25:08.446591 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-07-12 20:25:08.446595 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-07-12 20:25:08.446598 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-07-12 20:25:08.446602 | orchestrator |
2025-07-12 20:25:08.446606 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2025-07-12 20:25:08.446610 | orchestrator | Saturday 12 July 2025 20:22:01 +0000 (0:00:04.229) 0:08:44.455 *********
2025-07-12 20:25:08.446613 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.446620 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.446628 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-07-12 20:25:08.446632 | orchestrator |
2025-07-12 20:25:08.446636 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2025-07-12 20:25:08.446639 | orchestrator | Saturday 12 July 2025 20:22:03 +0000 (0:00:02.239) 0:08:46.694 *********
2025-07-12 20:25:08.446643 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.446647 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.446651 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2025-07-12 20:25:08.446654 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-07-12 20:25:08.446658 | orchestrator |
2025-07-12 20:25:08.446662 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2025-07-12 20:25:08.446665 | orchestrator | Saturday 12 July 2025 20:22:16 +0000 (0:00:12.793) 0:08:59.488 *********
2025-07-12 20:25:08.446669 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.446673 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.446676 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.446680 | orchestrator |
2025-07-12 20:25:08.446684 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-07-12 20:25:08.446688 | orchestrator | Saturday 12 July 2025 20:22:16 +0000 (0:00:00.860) 0:09:00.349 *********
2025-07-12 20:25:08.446691 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.446695 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.446699 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.446702 | orchestrator |
2025-07-12 20:25:08.446706 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-07-12 20:25:08.446710 | orchestrator | Saturday 12 July 2025 20:22:17 +0000 (0:00:00.650) 0:09:00.999 *********
2025-07-12 20:25:08.446713 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.446717 | orchestrator |
2025-07-12 20:25:08.446721 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-07-12 20:25:08.446725 | orchestrator | Saturday 12 July 2025 20:22:18 +0000 (0:00:00.594) 0:09:01.594 *********
2025-07-12 20:25:08.446728 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 20:25:08.446732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 20:25:08.446736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 20:25:08.446739 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.446746 | orchestrator |
2025-07-12 20:25:08.446750 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-07-12 20:25:08.446754 | orchestrator | Saturday 12 July 2025 20:22:18 +0000 (0:00:00.390) 0:09:01.985 *********
2025-07-12 20:25:08.446757 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.446761 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.446765 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.446769 | orchestrator |
2025-07-12 20:25:08.446772 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-07-12 20:25:08.446776 | orchestrator | Saturday 12 July 2025 20:22:19 +0000 (0:00:00.230) 0:09:02.428 *********
2025-07-12 20:25:08.446780 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.446783 | orchestrator |
2025-07-12 20:25:08.446787 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-07-12 20:25:08.446791 | orchestrator | Saturday 12 July 2025 20:22:19 +0000 (0:00:00.626) 0:09:02.658 *********
2025-07-12 20:25:08.446795 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.446798 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.446802 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.446806 | orchestrator |
2025-07-12 20:25:08.446809 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-07-12 20:25:08.446813 | orchestrator | Saturday 12 July 2025 20:22:19 +0000 (0:00:00.273) 0:09:03.285 *********
2025-07-12 20:25:08.446817 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.446821 | orchestrator |
2025-07-12 20:25:08.446824 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-07-12 20:25:08.446828 | orchestrator | Saturday 12 July 2025 20:22:20 +0000 (0:00:00.280) 0:09:03.558 *********
2025-07-12 20:25:08.446832 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.446835 | orchestrator |
2025-07-12 20:25:08.446839 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-07-12 20:25:08.446843 | orchestrator | Saturday 12 July 2025 20:22:20 +0000 (0:00:00.280) 0:09:03.839 *********
2025-07-12 20:25:08.446847 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.446850 | orchestrator |
2025-07-12 20:25:08.446854 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-07-12 20:25:08.446858 | orchestrator | Saturday 12 July 2025 20:22:20 +0000 (0:00:00.125) 0:09:03.965 *********
2025-07-12 20:25:08.446861 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.446865 | orchestrator |
2025-07-12 20:25:08.446869 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-07-12 20:25:08.446873 | orchestrator | Saturday 12 July 2025 20:22:20 +0000 (0:00:00.218) 0:09:04.184 *********
2025-07-12 20:25:08.446876 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.446880 | orchestrator |
2025-07-12 20:25:08.446884 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-07-12 20:25:08.446887 | orchestrator | Saturday 12 July 2025 20:22:21 +0000 (0:00:00.230) 0:09:04.414 *********
2025-07-12 20:25:08.446891 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 20:25:08.446895 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 20:25:08.446899 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 20:25:08.446902 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.446906 | orchestrator |
2025-07-12 20:25:08.446910 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-07-12 20:25:08.446913 | orchestrator | Saturday 12 July 2025 20:22:21 +0000 (0:00:00.382) 0:09:04.797 *********
2025-07-12 20:25:08.446917 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.446923 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.446930 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.446934 | orchestrator |
2025-07-12 20:25:08.446937 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-07-12 20:25:08.446941 | orchestrator | Saturday 12 July 2025 20:22:21 +0000 (0:00:00.309) 0:09:05.106 *********
2025-07-12 20:25:08.446948 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.446952 | orchestrator |
2025-07-12 20:25:08.446955 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-07-12 20:25:08.446959 | orchestrator | Saturday 12 July 2025 20:22:22 +0000 (0:00:00.888) 0:09:05.995 *********
2025-07-12 20:25:08.446963 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.446966 | orchestrator |
2025-07-12 20:25:08.446970 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-07-12 20:25:08.446974 | orchestrator |
2025-07-12 20:25:08.446977 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-12 20:25:08.446981 | orchestrator | Saturday 12 July 2025 20:22:23 +0000 (0:00:00.738) 0:09:06.734 *********
2025-07-12 20:25:08.446985 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.446990 | orchestrator |
2025-07-12 20:25:08.446993 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-12 20:25:08.446997 | orchestrator | Saturday 12 July 2025 20:22:24 +0000 (0:00:01.326) 0:09:08.061 *********
2025-07-12 20:25:08.447001 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.447005 | orchestrator |
2025-07-12 20:25:08.447008 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-12 20:25:08.447012 | orchestrator | Saturday 12 July 2025 20:22:26 +0000 (0:00:01.390) 0:09:09.451 *********
2025-07-12 20:25:08.447016 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.447019 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.447023 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.447027 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.447030 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.447034 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.447038 | orchestrator |
2025-07-12 20:25:08.447041 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-12 20:25:08.447045 | orchestrator | Saturday 12 July 2025 20:22:27 +0000 (0:00:01.057) 0:09:10.509 *********
2025-07-12 20:25:08.447049 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.447052 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.447056 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.447060 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.447064 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.447067 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.447071 | orchestrator |
2025-07-12 20:25:08.447075 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-12 20:25:08.447078 | orchestrator | Saturday 12 July 2025 20:22:28 +0000 (0:00:00.997) 0:09:11.506 *********
2025-07-12 20:25:08.447082 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.447087 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.447093 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.447099 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.447105 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.447111 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.447117 | orchestrator |
2025-07-12 20:25:08.447122 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-12 20:25:08.447129 | orchestrator | Saturday 12 July 2025 20:22:29 +0000 (0:00:01.333) 0:09:12.840 *********
2025-07-12 20:25:08.447135 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:25:08.447140 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:25:08.447146 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:25:08.447151 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.447157 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.447163 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.447169 | orchestrator |
2025-07-12 20:25:08.447176 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-12 20:25:08.447187 | orchestrator | Saturday 12 July 2025 20:22:30 +0000 (0:00:01.006) 0:09:13.846 *********
2025-07-12 20:25:08.447193 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.447199 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.447206 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:25:08.447212 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:25:08.447218 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:25:08.447224 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.447230 | orchestrator |
2025-07-12 20:25:08.447236 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container]
************************* 2025-07-12 20:25:08.447242 | orchestrator | Saturday 12 July 2025 20:22:31 +0000 (0:00:00.902) 0:09:14.749 ********* 2025-07-12 20:25:08.447248 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.447254 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.447261 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.447267 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.447273 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.447279 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.447284 | orchestrator | 2025-07-12 20:25:08.447290 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-12 20:25:08.447296 | orchestrator | Saturday 12 July 2025 20:22:32 +0000 (0:00:00.776) 0:09:15.525 ********* 2025-07-12 20:25:08.447302 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.447308 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.447314 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.447320 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.447326 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.447332 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.447338 | orchestrator | 2025-07-12 20:25:08.447383 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-12 20:25:08.447389 | orchestrator | Saturday 12 July 2025 20:22:32 +0000 (0:00:00.875) 0:09:16.401 ********* 2025-07-12 20:25:08.447396 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.447406 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.447420 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.447427 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.447434 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.447440 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.447447 | orchestrator 
| 2025-07-12 20:25:08.447453 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-12 20:25:08.447460 | orchestrator | Saturday 12 July 2025 20:22:34 +0000 (0:00:01.127) 0:09:17.528 ********* 2025-07-12 20:25:08.447466 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.447472 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.447478 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.447483 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.447490 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.447496 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.447502 | orchestrator | 2025-07-12 20:25:08.447509 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-12 20:25:08.447515 | orchestrator | Saturday 12 July 2025 20:22:35 +0000 (0:00:01.351) 0:09:18.880 ********* 2025-07-12 20:25:08.447521 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.447528 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.447532 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.447537 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.447544 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.447550 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.447555 | orchestrator | 2025-07-12 20:25:08.447561 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-12 20:25:08.447568 | orchestrator | Saturday 12 July 2025 20:22:36 +0000 (0:00:00.611) 0:09:19.491 ********* 2025-07-12 20:25:08.447573 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.447580 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.447596 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.447603 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.447609 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.447615 | 
orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.447621 | orchestrator | 2025-07-12 20:25:08.447625 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-12 20:25:08.447629 | orchestrator | Saturday 12 July 2025 20:22:36 +0000 (0:00:00.899) 0:09:20.391 ********* 2025-07-12 20:25:08.447634 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.447640 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.447646 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.447652 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.447658 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.447664 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.447670 | orchestrator | 2025-07-12 20:25:08.447677 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-12 20:25:08.447683 | orchestrator | Saturday 12 July 2025 20:22:37 +0000 (0:00:00.697) 0:09:21.088 ********* 2025-07-12 20:25:08.447689 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.447696 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.447702 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.447708 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.447714 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.447721 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.447726 | orchestrator | 2025-07-12 20:25:08.447733 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-12 20:25:08.447739 | orchestrator | Saturday 12 July 2025 20:22:38 +0000 (0:00:00.911) 0:09:21.999 ********* 2025-07-12 20:25:08.447745 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.447751 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.447757 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.447763 | orchestrator | ok: [testbed-node-3] 
2025-07-12 20:25:08.447770 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.447776 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.447782 | orchestrator | 2025-07-12 20:25:08.447788 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-12 20:25:08.447794 | orchestrator | Saturday 12 July 2025 20:22:39 +0000 (0:00:00.660) 0:09:22.660 ********* 2025-07-12 20:25:08.447800 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.447806 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.447812 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.447818 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.447824 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.447830 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.447837 | orchestrator | 2025-07-12 20:25:08.447843 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-12 20:25:08.447849 | orchestrator | Saturday 12 July 2025 20:22:40 +0000 (0:00:00.873) 0:09:23.533 ********* 2025-07-12 20:25:08.447856 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:25:08.447862 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:25:08.447868 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:25:08.447874 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.447880 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.447886 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.447892 | orchestrator | 2025-07-12 20:25:08.447898 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-12 20:25:08.447904 | orchestrator | Saturday 12 July 2025 20:22:40 +0000 (0:00:00.674) 0:09:24.207 ********* 2025-07-12 20:25:08.447910 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.447917 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.447922 | 
orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.447928 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.447935 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.447941 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.447954 | orchestrator | 2025-07-12 20:25:08.447960 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-12 20:25:08.447966 | orchestrator | Saturday 12 July 2025 20:22:41 +0000 (0:00:00.868) 0:09:25.076 ********* 2025-07-12 20:25:08.447973 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.447979 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.447985 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.447991 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.447997 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.448003 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.448009 | orchestrator | 2025-07-12 20:25:08.448015 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-07-12 20:25:08.448021 | orchestrator | Saturday 12 July 2025 20:22:42 +0000 (0:00:00.653) 0:09:25.730 ********* 2025-07-12 20:25:08.448027 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.448038 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.448050 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.448057 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.448063 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.448069 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.448075 | orchestrator | 2025-07-12 20:25:08.448081 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2025-07-12 20:25:08.448087 | orchestrator | Saturday 12 July 2025 20:22:43 +0000 (0:00:01.384) 0:09:27.114 ********* 2025-07-12 20:25:08.448094 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:25:08.448100 | orchestrator 
| 2025-07-12 20:25:08.448105 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-07-12 20:25:08.448111 | orchestrator | Saturday 12 July 2025 20:22:47 +0000 (0:00:03.929) 0:09:31.044 ********* 2025-07-12 20:25:08.448118 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.448124 | orchestrator | 2025-07-12 20:25:08.448131 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-07-12 20:25:08.448174 | orchestrator | Saturday 12 July 2025 20:22:49 +0000 (0:00:01.941) 0:09:32.985 ********* 2025-07-12 20:25:08.448180 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.448186 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:25:08.448192 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:25:08.448199 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:25:08.448204 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:25:08.448211 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:25:08.448217 | orchestrator | 2025-07-12 20:25:08.448223 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-07-12 20:25:08.448229 | orchestrator | Saturday 12 July 2025 20:22:51 +0000 (0:00:01.810) 0:09:34.795 ********* 2025-07-12 20:25:08.448235 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:25:08.448241 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:25:08.448248 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:25:08.448254 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:25:08.448260 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:25:08.448266 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:25:08.448272 | orchestrator | 2025-07-12 20:25:08.448278 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 2025-07-12 20:25:08.448284 | orchestrator | Saturday 12 July 2025 20:22:52 +0000 (0:00:00.951) 0:09:35.746 
********* 2025-07-12 20:25:08.448291 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:25:08.448299 | orchestrator | 2025-07-12 20:25:08.448305 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-07-12 20:25:08.448312 | orchestrator | Saturday 12 July 2025 20:22:53 +0000 (0:00:01.345) 0:09:37.092 ********* 2025-07-12 20:25:08.448318 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:25:08.448324 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:25:08.448329 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:25:08.448336 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:25:08.448360 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:25:08.448367 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:25:08.448374 | orchestrator | 2025-07-12 20:25:08.448381 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-07-12 20:25:08.448387 | orchestrator | Saturday 12 July 2025 20:22:55 +0000 (0:00:01.699) 0:09:38.791 ********* 2025-07-12 20:25:08.448394 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:25:08.448401 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:25:08.448407 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:25:08.448413 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:25:08.448420 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:25:08.448427 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:25:08.448434 | orchestrator | 2025-07-12 20:25:08.448441 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-07-12 20:25:08.448448 | orchestrator | Saturday 12 July 2025 20:22:58 +0000 (0:00:03.247) 0:09:42.038 ********* 2025-07-12 20:25:08.448455 | orchestrator | included: 
/ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:25:08.448461 | orchestrator | 2025-07-12 20:25:08.448468 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-07-12 20:25:08.448475 | orchestrator | Saturday 12 July 2025 20:23:00 +0000 (0:00:01.383) 0:09:43.422 ********* 2025-07-12 20:25:08.448482 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.448488 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.448495 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:25:08.448501 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.448508 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.448515 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.448522 | orchestrator | 2025-07-12 20:25:08.448529 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-07-12 20:25:08.448535 | orchestrator | Saturday 12 July 2025 20:23:00 +0000 (0:00:00.940) 0:09:44.362 ********* 2025-07-12 20:25:08.448542 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:25:08.448548 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:25:08.448555 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:25:08.448561 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:25:08.448569 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:25:08.448575 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:25:08.448582 | orchestrator | 2025-07-12 20:25:08.448589 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-07-12 20:25:08.448595 | orchestrator | Saturday 12 July 2025 20:23:03 +0000 (0:00:02.400) 0:09:46.763 ********* 2025-07-12 20:25:08.448602 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:25:08.448609 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:25:08.448616 | orchestrator | ok: 
[testbed-node-2] 2025-07-12 20:25:08.448622 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.448629 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.448635 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.448642 | orchestrator | 2025-07-12 20:25:08.448649 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-07-12 20:25:08.448656 | orchestrator | 2025-07-12 20:25:08.448663 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-07-12 20:25:08.448670 | orchestrator | Saturday 12 July 2025 20:23:04 +0000 (0:00:01.288) 0:09:48.052 ********* 2025-07-12 20:25:08.448686 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:25:08.448693 | orchestrator | 2025-07-12 20:25:08.448700 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-07-12 20:25:08.448707 | orchestrator | Saturday 12 July 2025 20:23:05 +0000 (0:00:00.582) 0:09:48.634 ********* 2025-07-12 20:25:08.448713 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:25:08.448724 | orchestrator | 2025-07-12 20:25:08.448731 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-07-12 20:25:08.448738 | orchestrator | Saturday 12 July 2025 20:23:06 +0000 (0:00:00.793) 0:09:49.428 ********* 2025-07-12 20:25:08.448745 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.448752 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.448759 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.448766 | orchestrator | 2025-07-12 20:25:08.448772 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-07-12 20:25:08.448779 | orchestrator | 
Saturday 12 July 2025 20:23:06 +0000 (0:00:00.344) 0:09:49.772 ********* 2025-07-12 20:25:08.448786 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.448792 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.448799 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.448805 | orchestrator | 2025-07-12 20:25:08.448812 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-07-12 20:25:08.448818 | orchestrator | Saturday 12 July 2025 20:23:07 +0000 (0:00:00.690) 0:09:50.463 ********* 2025-07-12 20:25:08.448825 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.448832 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.448839 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.448846 | orchestrator | 2025-07-12 20:25:08.448852 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-07-12 20:25:08.448859 | orchestrator | Saturday 12 July 2025 20:23:08 +0000 (0:00:01.021) 0:09:51.485 ********* 2025-07-12 20:25:08.448866 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.448873 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.448879 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.448886 | orchestrator | 2025-07-12 20:25:08.448892 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-07-12 20:25:08.448899 | orchestrator | Saturday 12 July 2025 20:23:08 +0000 (0:00:00.782) 0:09:52.267 ********* 2025-07-12 20:25:08.448906 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.448913 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.448919 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.448926 | orchestrator | 2025-07-12 20:25:08.448933 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-07-12 20:25:08.448940 | orchestrator | Saturday 12 July 2025 20:23:09 +0000 (0:00:00.342) 
0:09:52.610 ********* 2025-07-12 20:25:08.448946 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.448953 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.448960 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.448966 | orchestrator | 2025-07-12 20:25:08.448973 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-07-12 20:25:08.448979 | orchestrator | Saturday 12 July 2025 20:23:09 +0000 (0:00:00.319) 0:09:52.929 ********* 2025-07-12 20:25:08.448986 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.448993 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.449000 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.449007 | orchestrator | 2025-07-12 20:25:08.449013 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-07-12 20:25:08.449020 | orchestrator | Saturday 12 July 2025 20:23:10 +0000 (0:00:00.689) 0:09:53.619 ********* 2025-07-12 20:25:08.449026 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.449033 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.449040 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.449047 | orchestrator | 2025-07-12 20:25:08.449054 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-07-12 20:25:08.449060 | orchestrator | Saturday 12 July 2025 20:23:11 +0000 (0:00:00.835) 0:09:54.454 ********* 2025-07-12 20:25:08.449067 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.449074 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.449081 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.449088 | orchestrator | 2025-07-12 20:25:08.449095 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-07-12 20:25:08.449107 | orchestrator | Saturday 12 July 2025 20:23:11 +0000 (0:00:00.800) 0:09:55.255 ********* 2025-07-12 
20:25:08.449114 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.449121 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.449128 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.449134 | orchestrator | 2025-07-12 20:25:08.449141 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-07-12 20:25:08.449147 | orchestrator | Saturday 12 July 2025 20:23:12 +0000 (0:00:00.310) 0:09:55.565 ********* 2025-07-12 20:25:08.449154 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.449160 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.449167 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.449173 | orchestrator | 2025-07-12 20:25:08.449181 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-07-12 20:25:08.449187 | orchestrator | Saturday 12 July 2025 20:23:12 +0000 (0:00:00.600) 0:09:56.166 ********* 2025-07-12 20:25:08.449194 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.449201 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.449208 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.449214 | orchestrator | 2025-07-12 20:25:08.449221 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-07-12 20:25:08.449227 | orchestrator | Saturday 12 July 2025 20:23:13 +0000 (0:00:00.378) 0:09:56.544 ********* 2025-07-12 20:25:08.449234 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.449241 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.449248 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.449255 | orchestrator | 2025-07-12 20:25:08.449262 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-07-12 20:25:08.449268 | orchestrator | Saturday 12 July 2025 20:23:13 +0000 (0:00:00.342) 0:09:56.887 ********* 2025-07-12 20:25:08.449275 | orchestrator | ok: 
[testbed-node-3] 2025-07-12 20:25:08.449285 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.449296 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.449303 | orchestrator | 2025-07-12 20:25:08.449309 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-07-12 20:25:08.449315 | orchestrator | Saturday 12 July 2025 20:23:13 +0000 (0:00:00.375) 0:09:57.262 ********* 2025-07-12 20:25:08.449322 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.449328 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.449335 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.449356 | orchestrator | 2025-07-12 20:25:08.449362 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-07-12 20:25:08.449369 | orchestrator | Saturday 12 July 2025 20:23:14 +0000 (0:00:00.683) 0:09:57.946 ********* 2025-07-12 20:25:08.449375 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.449382 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.449389 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.449394 | orchestrator | 2025-07-12 20:25:08.449400 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-07-12 20:25:08.449406 | orchestrator | Saturday 12 July 2025 20:23:14 +0000 (0:00:00.343) 0:09:58.289 ********* 2025-07-12 20:25:08.449412 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.449418 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.449424 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.449430 | orchestrator | 2025-07-12 20:25:08.449436 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-07-12 20:25:08.449442 | orchestrator | Saturday 12 July 2025 20:23:15 +0000 (0:00:00.366) 0:09:58.656 ********* 2025-07-12 20:25:08.449448 | orchestrator | ok: [testbed-node-3] 
2025-07-12 20:25:08.449454 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.449460 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.449466 | orchestrator |
2025-07-12 20:25:08.449473 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-07-12 20:25:08.449479 | orchestrator | Saturday 12 July 2025 20:23:15 +0000 (0:00:00.357) 0:09:59.014 *********
2025-07-12 20:25:08.449491 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.449497 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.449503 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.449509 | orchestrator |
2025-07-12 20:25:08.449515 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2025-07-12 20:25:08.449522 | orchestrator | Saturday 12 July 2025 20:23:16 +0000 (0:00:00.891) 0:09:59.906 *********
2025-07-12 20:25:08.449528 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.449534 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.449540 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2025-07-12 20:25:08.449547 | orchestrator |
2025-07-12 20:25:08.449553 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2025-07-12 20:25:08.449559 | orchestrator | Saturday 12 July 2025 20:23:16 +0000 (0:00:00.409) 0:10:00.315 *********
2025-07-12 20:25:08.449565 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-07-12 20:25:08.449572 | orchestrator |
2025-07-12 20:25:08.449578 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2025-07-12 20:25:08.449584 | orchestrator | Saturday 12 July 2025 20:23:19 +0000 (0:00:02.099) 0:10:02.414 *********
2025-07-12 20:25:08.449592 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2025-07-12 20:25:08.449600 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.449607 | orchestrator |
2025-07-12 20:25:08.449614 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2025-07-12 20:25:08.449620 | orchestrator | Saturday 12 July 2025 20:23:19 +0000 (0:00:00.207) 0:10:02.621 *********
2025-07-12 20:25:08.449629 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-07-12 20:25:08.449643 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-07-12 20:25:08.449649 | orchestrator |
2025-07-12 20:25:08.449656 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2025-07-12 20:25:08.449662 | orchestrator | Saturday 12 July 2025 20:23:27 +0000 (0:00:07.982) 0:10:10.603 *********
2025-07-12 20:25:08.449668 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-07-12 20:25:08.449674 | orchestrator |
2025-07-12 20:25:08.449680 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2025-07-12 20:25:08.449686 | orchestrator | Saturday 12 July 2025 20:23:30 +0000 (0:00:03.706) 0:10:14.310 *********
2025-07-12 20:25:08.449692 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.449699 | orchestrator |
2025-07-12 20:25:08.449705 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2025-07-12 20:25:08.449711 | orchestrator | Saturday 12 July 2025 20:23:31 +0000 (0:00:00.579) 0:10:14.890 *********
2025-07-12 20:25:08.449718 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2025-07-12 20:25:08.449724 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2025-07-12 20:25:08.449730 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2025-07-12 20:25:08.449736 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2025-07-12 20:25:08.449750 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2025-07-12 20:25:08.449757 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2025-07-12 20:25:08.449767 | orchestrator |
2025-07-12 20:25:08.449774 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2025-07-12 20:25:08.449780 | orchestrator | Saturday 12 July 2025 20:23:32 +0000 (0:00:01.062) 0:10:15.953 *********
2025-07-12 20:25:08.449786 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 20:25:08.449793 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-07-12 20:25:08.449799 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-07-12 20:25:08.449805 | orchestrator |
2025-07-12 20:25:08.449811 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2025-07-12 20:25:08.449817 | orchestrator | Saturday 12 July 2025 20:23:35 +0000 (0:00:02.604) 0:10:18.557 *********
2025-07-12 20:25:08.449824 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-12 20:25:08.449830 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-07-12 20:25:08.449836 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:25:08.449842 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-12 20:25:08.449849 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-07-12 20:25:08.449855 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:25:08.449861 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-12 20:25:08.449867 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-07-12 20:25:08.449874 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:25:08.449880 | orchestrator |
2025-07-12 20:25:08.449886 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2025-07-12 20:25:08.449892 | orchestrator | Saturday 12 July 2025 20:23:36 +0000 (0:00:01.773) 0:10:20.331 *********
2025-07-12 20:25:08.449898 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:25:08.449904 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:25:08.449911 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:25:08.449917 | orchestrator |
2025-07-12 20:25:08.449923 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2025-07-12 20:25:08.449929 | orchestrator | Saturday 12 July 2025 20:23:39 +0000 (0:00:02.779) 0:10:23.110 *********
2025-07-12 20:25:08.449936 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.449942 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.449948 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.449954 | orchestrator |
2025-07-12 20:25:08.449960 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2025-07-12 20:25:08.449967 | orchestrator | Saturday 12 July 2025 20:23:40 +0000 (0:00:00.315) 0:10:23.426 *********
2025-07-12 20:25:08.449973 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.449979 | orchestrator |
2025-07-12 20:25:08.449986 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2025-07-12 20:25:08.449992 | orchestrator | Saturday 12 July 2025 20:23:40 +0000 (0:00:00.803) 0:10:24.230 *********
2025-07-12 20:25:08.449998 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.450005 | orchestrator |
2025-07-12 20:25:08.450009 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2025-07-12 20:25:08.450032 | orchestrator | Saturday 12 July 2025 20:23:41 +0000 (0:00:00.706) 0:10:24.937 *********
2025-07-12 20:25:08.450037 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:25:08.450040 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:25:08.450044 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:25:08.450048 | orchestrator |
2025-07-12 20:25:08.450051 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2025-07-12 20:25:08.450055 | orchestrator | Saturday 12 July 2025 20:23:42 +0000 (0:00:01.323) 0:10:26.260 *********
2025-07-12 20:25:08.450059 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:25:08.450062 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:25:08.450066 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:25:08.450073 | orchestrator |
2025-07-12 20:25:08.450077 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2025-07-12 20:25:08.450081 | orchestrator | Saturday 12 July 2025 20:23:44 +0000 (0:00:01.760) 0:10:27.765 *********
2025-07-12 20:25:08.450084 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:25:08.450088 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:25:08.450092 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:25:08.450095 | orchestrator |
2025-07-12 20:25:08.450099 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2025-07-12 20:25:08.450103 | orchestrator | Saturday 12 July 2025 20:23:46 +0000 (0:00:01.887) 0:10:29.525 *********
2025-07-12 20:25:08.450106 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:25:08.450110 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:25:08.450114 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:25:08.450117 | orchestrator |
2025-07-12 20:25:08.450121 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2025-07-12 20:25:08.450125 | orchestrator | Saturday 12 July 2025 20:23:48 +0000 (0:00:01.887) 0:10:31.413 *********
2025-07-12 20:25:08.450128 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.450132 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.450136 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.450139 | orchestrator |
2025-07-12 20:25:08.450143 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-07-12 20:25:08.450147 | orchestrator | Saturday 12 July 2025 20:23:49 +0000 (0:00:01.513) 0:10:32.927 *********
2025-07-12 20:25:08.450150 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:25:08.450154 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:25:08.450158 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:25:08.450161 | orchestrator |
2025-07-12 20:25:08.450165 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2025-07-12 20:25:08.450169 | orchestrator | Saturday 12 July 2025 20:23:50 +0000 (0:00:00.685) 0:10:33.612 *********
2025-07-12 20:25:08.450179 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.450183 | orchestrator |
2025-07-12 20:25:08.450187 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2025-07-12 20:25:08.450191 | orchestrator | Saturday 12 July 2025 20:23:51 +0000 (0:00:00.794) 0:10:34.407 *********
2025-07-12 20:25:08.450194 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.450198 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.450201 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.450205 | orchestrator |
2025-07-12 20:25:08.450209 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2025-07-12 20:25:08.450213 | orchestrator | Saturday 12 July 2025 20:23:51 +0000 (0:00:00.342) 0:10:34.749 *********
2025-07-12 20:25:08.450216 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:25:08.450220 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:25:08.450224 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:25:08.450227 | orchestrator |
2025-07-12 20:25:08.450231 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2025-07-12 20:25:08.450235 | orchestrator | Saturday 12 July 2025 20:23:52 +0000 (0:00:01.216) 0:10:35.966 *********
2025-07-12 20:25:08.450238 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 20:25:08.450242 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 20:25:08.450246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 20:25:08.450249 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.450253 | orchestrator |
2025-07-12 20:25:08.450256 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2025-07-12 20:25:08.450260 | orchestrator | Saturday 12 July 2025 20:23:53 +0000 (0:00:00.904) 0:10:36.871 *********
2025-07-12 20:25:08.450264 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.450268 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.450271 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.450278 | orchestrator |
2025-07-12 20:25:08.450282 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-07-12 20:25:08.450286 | orchestrator |
2025-07-12 20:25:08.450289 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-07-12 20:25:08.450293 | orchestrator | Saturday 12 July 2025 20:23:54 +0000 (0:00:00.825) 0:10:37.696 *********
2025-07-12 20:25:08.450297 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.450301 | orchestrator |
2025-07-12 20:25:08.450304 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-07-12 20:25:08.450308 | orchestrator | Saturday 12 July 2025 20:23:54 +0000 (0:00:00.495) 0:10:38.191 *********
2025-07-12 20:25:08.450312 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.450315 | orchestrator |
2025-07-12 20:25:08.450319 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-07-12 20:25:08.450323 | orchestrator | Saturday 12 July 2025 20:23:55 +0000 (0:00:00.772) 0:10:38.963 *********
2025-07-12 20:25:08.450326 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.450330 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.450334 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.450337 | orchestrator |
2025-07-12 20:25:08.450377 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-07-12 20:25:08.450382 | orchestrator | Saturday 12 July 2025 20:23:55 +0000 (0:00:00.346) 0:10:39.310 *********
2025-07-12 20:25:08.450386 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.450389 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.450393 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.450397 | orchestrator |
2025-07-12 20:25:08.450400 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-07-12 20:25:08.450404 | orchestrator | Saturday 12 July 2025 20:23:56 +0000 (0:00:00.731) 0:10:40.042 *********
2025-07-12 20:25:08.450408 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.450412 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.450415 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.450419 | orchestrator |
2025-07-12 20:25:08.450423 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-07-12 20:25:08.450426 | orchestrator | Saturday 12 July 2025 20:23:57 +0000 (0:00:00.972) 0:10:41.015 *********
2025-07-12 20:25:08.450430 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.450434 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.450437 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.450441 | orchestrator |
2025-07-12 20:25:08.450445 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-07-12 20:25:08.450449 | orchestrator | Saturday 12 July 2025 20:23:58 +0000 (0:00:00.731) 0:10:41.746 *********
2025-07-12 20:25:08.450452 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.450456 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.450460 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.450463 | orchestrator |
2025-07-12 20:25:08.450467 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-07-12 20:25:08.450471 | orchestrator | Saturday 12 July 2025 20:23:58 +0000 (0:00:00.335) 0:10:42.081 *********
2025-07-12 20:25:08.450474 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.450478 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.450482 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.450485 | orchestrator |
2025-07-12 20:25:08.450489 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-07-12 20:25:08.450493 | orchestrator | Saturday 12 July 2025 20:23:58 +0000 (0:00:00.305) 0:10:42.387 *********
2025-07-12 20:25:08.450496 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.450500 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.450504 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.450507 | orchestrator |
2025-07-12 20:25:08.450514 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-07-12 20:25:08.450518 | orchestrator | Saturday 12 July 2025 20:23:59 +0000 (0:00:00.597) 0:10:42.984 *********
2025-07-12 20:25:08.450522 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.450526 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.450530 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.450533 | orchestrator |
2025-07-12 20:25:08.450546 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-07-12 20:25:08.450550 | orchestrator | Saturday 12 July 2025 20:24:00 +0000 (0:00:00.736) 0:10:43.720 *********
2025-07-12 20:25:08.450553 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.450557 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.450561 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.450564 | orchestrator |
2025-07-12 20:25:08.450568 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-07-12 20:25:08.450572 | orchestrator | Saturday 12 July 2025 20:24:01 +0000 (0:00:00.772) 0:10:44.493 *********
2025-07-12 20:25:08.450576 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.450579 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.450583 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.450587 | orchestrator |
2025-07-12 20:25:08.450590 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-07-12 20:25:08.450594 | orchestrator | Saturday 12 July 2025 20:24:01 +0000 (0:00:00.300) 0:10:44.793 *********
2025-07-12 20:25:08.450598 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.450601 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.450605 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.450609 | orchestrator |
2025-07-12 20:25:08.450612 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-07-12 20:25:08.450616 | orchestrator | Saturday 12 July 2025 20:24:01 +0000 (0:00:00.299) 0:10:45.093 *********
2025-07-12 20:25:08.450620 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.450623 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.450627 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.450631 | orchestrator |
2025-07-12 20:25:08.450634 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-07-12 20:25:08.450638 | orchestrator | Saturday 12 July 2025 20:24:02 +0000 (0:00:00.616) 0:10:45.710 *********
2025-07-12 20:25:08.450642 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.450646 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.450649 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.450653 | orchestrator |
2025-07-12 20:25:08.450657 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-07-12 20:25:08.450660 | orchestrator | Saturday 12 July 2025 20:24:02 +0000 (0:00:00.332) 0:10:46.043 *********
2025-07-12 20:25:08.450664 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.450668 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.450671 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.450675 | orchestrator |
2025-07-12 20:25:08.450679 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-07-12 20:25:08.450682 | orchestrator | Saturday 12 July 2025 20:24:02 +0000 (0:00:00.319) 0:10:46.362 *********
2025-07-12 20:25:08.450686 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.450690 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.450694 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.450697 | orchestrator |
2025-07-12 20:25:08.450701 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-07-12 20:25:08.450705 | orchestrator | Saturday 12 July 2025 20:24:03 +0000 (0:00:00.342) 0:10:46.705 *********
2025-07-12 20:25:08.450708 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.450732 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.450736 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.450740 | orchestrator |
2025-07-12 20:25:08.450744 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-07-12 20:25:08.450751 | orchestrator | Saturday 12 July 2025 20:24:03 +0000 (0:00:00.602) 0:10:47.307 *********
2025-07-12 20:25:08.450755 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.450758 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.450762 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.450766 | orchestrator |
2025-07-12 20:25:08.450769 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-07-12 20:25:08.450773 | orchestrator | Saturday 12 July 2025 20:24:04 +0000 (0:00:00.363) 0:10:47.671 *********
2025-07-12 20:25:08.450777 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.450780 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.450784 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.450788 | orchestrator |
2025-07-12 20:25:08.450791 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-07-12 20:25:08.450795 | orchestrator | Saturday 12 July 2025 20:24:04 +0000 (0:00:00.349) 0:10:48.020 *********
2025-07-12 20:25:08.450799 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:25:08.450802 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:25:08.450806 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:25:08.450810 | orchestrator |
2025-07-12 20:25:08.450813 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2025-07-12 20:25:08.450817 | orchestrator | Saturday 12 July 2025 20:24:05 +0000 (0:00:00.803) 0:10:48.824 *********
2025-07-12 20:25:08.450821 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.450825 | orchestrator |
2025-07-12 20:25:08.450828 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-07-12 20:25:08.450832 | orchestrator | Saturday 12 July 2025 20:24:05 +0000 (0:00:00.557) 0:10:49.382 *********
2025-07-12 20:25:08.450836 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 20:25:08.450839 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-07-12 20:25:08.450843 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-07-12 20:25:08.450848 | orchestrator |
2025-07-12 20:25:08.450854 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-07-12 20:25:08.450860 | orchestrator | Saturday 12 July 2025 20:24:08 +0000 (0:00:02.107) 0:10:51.489 *********
2025-07-12 20:25:08.450866 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-12 20:25:08.450872 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-07-12 20:25:08.450878 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:25:08.450884 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-12 20:25:08.450890 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-07-12 20:25:08.450895 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:25:08.450901 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-12 20:25:08.450910 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-07-12 20:25:08.450921 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:25:08.450927 | orchestrator |
2025-07-12 20:25:08.450933 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2025-07-12 20:25:08.450938 | orchestrator | Saturday 12 July 2025 20:24:09 +0000 (0:00:01.471) 0:10:52.960 *********
2025-07-12 20:25:08.450961 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.450967 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.450973 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.450979 | orchestrator |
2025-07-12 20:25:08.450986 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2025-07-12 20:25:08.450992 | orchestrator | Saturday 12 July 2025 20:24:09 +0000 (0:00:00.339) 0:10:53.300 *********
2025-07-12 20:25:08.450998 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.451004 | orchestrator |
2025-07-12 20:25:08.451010 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2025-07-12 20:25:08.451016 | orchestrator | Saturday 12 July 2025 20:24:10 +0000 (0:00:00.597) 0:10:53.897 *********
2025-07-12 20:25:08.451027 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-07-12 20:25:08.451034 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-07-12 20:25:08.451040 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-07-12 20:25:08.451046 | orchestrator |
2025-07-12 20:25:08.451052 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2025-07-12 20:25:08.451058 | orchestrator | Saturday 12 July 2025 20:24:11 +0000 (0:00:01.308) 0:10:55.206 *********
2025-07-12 20:25:08.451064 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 20:25:08.451071 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-07-12 20:25:08.451077 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 20:25:08.451084 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-07-12 20:25:08.451090 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 20:25:08.451098 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2025-07-12 20:25:08.451102 | orchestrator |
2025-07-12 20:25:08.451105 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2025-07-12 20:25:08.451109 | orchestrator | Saturday 12 July 2025 20:24:16 +0000 (0:00:04.407) 0:10:59.613 *********
2025-07-12 20:25:08.451113 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 20:25:08.451116 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-07-12 20:25:08.451120 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 20:25:08.451124 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2025-07-12 20:25:08.451127 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-07-12 20:25:08.451131 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-07-12 20:25:08.451135 | orchestrator |
2025-07-12 20:25:08.451138 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2025-07-12 20:25:08.451142 | orchestrator | Saturday 12 July 2025 20:24:18 +0000 (0:00:02.332) 0:11:01.946 *********
2025-07-12 20:25:08.451146 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-12 20:25:08.451160 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:25:08.451163 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-12 20:25:08.451167 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:25:08.451171 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-12 20:25:08.451175 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:25:08.451178 | orchestrator |
2025-07-12 20:25:08.451182 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2025-07-12 20:25:08.451186 | orchestrator | Saturday 12 July 2025 20:24:19 +0000 (0:00:01.241) 0:11:03.188 *********
2025-07-12 20:25:08.451190 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2025-07-12 20:25:08.451193 | orchestrator |
2025-07-12 20:25:08.451197 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2025-07-12 20:25:08.451201 | orchestrator | Saturday 12 July 2025 20:24:20 +0000 (0:00:00.241) 0:11:03.429 *********
2025-07-12 20:25:08.451205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:25:08.451213 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:25:08.451217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:25:08.451227 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:25:08.451232 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:25:08.451235 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.451239 | orchestrator |
2025-07-12 20:25:08.451243 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2025-07-12 20:25:08.451247 | orchestrator | Saturday 12 July 2025 20:24:21 +0000 (0:00:01.155) 0:11:04.585 *********
2025-07-12 20:25:08.451250 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:25:08.451254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:25:08.451258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:25:08.451262 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:25:08.451265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:25:08.451269 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.451273 | orchestrator |
2025-07-12 20:25:08.451276 | orchestrator | TASK [ceph-rgw : Create rgw pools] *********************************************
2025-07-12 20:25:08.451280 | orchestrator | Saturday 12 July 2025 20:24:21 +0000 (0:00:00.641) 0:11:05.226 *********
2025-07-12 20:25:08.451284 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:25:08.451288 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:25:08.451292 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:25:08.451295 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:25:08.451299 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-07-12 20:25:08.451303 | orchestrator |
2025-07-12 20:25:08.451307 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
2025-07-12 20:25:08.451310 | orchestrator | Saturday 12 July 2025 20:24:53 +0000 (0:00:31.640) 0:11:36.866 *********
2025-07-12 20:25:08.451314 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.451318 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.451322 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.451325 | orchestrator |
2025-07-12 20:25:08.451329 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
2025-07-12 20:25:08.451333 | orchestrator | Saturday 12 July 2025 20:24:53 +0000 (0:00:00.371) 0:11:37.238 *********
2025-07-12 20:25:08.451337 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.451370 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.451374 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.451377 | orchestrator |
2025-07-12 20:25:08.451381 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
2025-07-12 20:25:08.451388 | orchestrator | Saturday 12 July 2025 20:24:54 +0000 (0:00:00.315) 0:11:37.553 *********
2025-07-12 20:25:08.451392 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.451395 | orchestrator |
2025-07-12 20:25:08.451399 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] *************************************
2025-07-12 20:25:08.451402 | orchestrator | Saturday 12 July 2025 20:24:55 +0000 (0:00:00.895) 0:11:38.449 *********
2025-07-12 20:25:08.451406 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:25:08.451410 | orchestrator |
2025-07-12 20:25:08.451413 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] ***********************************
2025-07-12 20:25:08.451417 | orchestrator | Saturday 12 July 2025 20:24:55 +0000 (0:00:00.575) 0:11:39.025 *********
2025-07-12 20:25:08.451421 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:25:08.451424 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:25:08.451428 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:25:08.451432 | orchestrator |
2025-07-12 20:25:08.451435 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
2025-07-12 20:25:08.451439 | orchestrator | Saturday 12 July 2025 20:24:56 +0000 (0:00:01.284) 0:11:40.310 *********
2025-07-12 20:25:08.451443 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:25:08.451446 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:25:08.451450 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:25:08.451454 | orchestrator |
2025-07-12 20:25:08.451457 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2025-07-12 20:25:08.451461 | orchestrator | Saturday 12 July 2025 20:24:58 +0000 (0:00:01.487) 0:11:41.798 *********
2025-07-12 20:25:08.451465 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:25:08.451511 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:25:08.451516 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:25:08.451520 | orchestrator |
2025-07-12 20:25:08.451530 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2025-07-12 20:25:08.451534 | orchestrator | Saturday 12 July 2025 20:25:00 +0000 (0:00:01.699) 0:11:43.497 *********
2025-07-12 20:25:08.451542 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-07-12 20:25:08.451548 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-07-12 20:25:08.451555 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-07-12 20:25:08.451561 | orchestrator |
2025-07-12 20:25:08.451568 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-07-12 20:25:08.451575 | orchestrator | Saturday 12 July 2025 20:25:02 +0000 (0:00:02.587) 0:11:46.085 *********
2025-07-12 20:25:08.451582 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:25:08.451588 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:25:08.451594 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:25:08.451600 | orchestrator |
2025-07-12 20:25:08.451607 | orchestrator | RUNNING HANDLER
[ceph-handler : Rgws handler] ********************************** 2025-07-12 20:25:08.451613 | orchestrator | Saturday 12 July 2025 20:25:03 +0000 (0:00:00.342) 0:11:46.428 ********* 2025-07-12 20:25:08.451620 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:25:08.451627 | orchestrator | 2025-07-12 20:25:08.451633 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2025-07-12 20:25:08.451639 | orchestrator | Saturday 12 July 2025 20:25:03 +0000 (0:00:00.540) 0:11:46.968 ********* 2025-07-12 20:25:08.451645 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.451652 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.451659 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.451671 | orchestrator | 2025-07-12 20:25:08.451677 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2025-07-12 20:25:08.451682 | orchestrator | Saturday 12 July 2025 20:25:04 +0000 (0:00:00.578) 0:11:47.547 ********* 2025-07-12 20:25:08.451690 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.451697 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:25:08.451704 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:25:08.451711 | orchestrator | 2025-07-12 20:25:08.451717 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2025-07-12 20:25:08.451725 | orchestrator | Saturday 12 July 2025 20:25:04 +0000 (0:00:00.366) 0:11:47.913 ********* 2025-07-12 20:25:08.451732 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-07-12 20:25:08.451739 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-07-12 20:25:08.451746 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-07-12 20:25:08.451752 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:25:08.451759 | 
orchestrator | 2025-07-12 20:25:08.451765 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2025-07-12 20:25:08.451772 | orchestrator | Saturday 12 July 2025 20:25:05 +0000 (0:00:00.748) 0:11:48.662 ********* 2025-07-12 20:25:08.451806 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:25:08.451811 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:25:08.451815 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:25:08.451819 | orchestrator | 2025-07-12 20:25:08.451822 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:25:08.451826 | orchestrator | testbed-node-0 : ok=141  changed=36  unreachable=0 failed=0 skipped=135  rescued=0 ignored=0 2025-07-12 20:25:08.451831 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2025-07-12 20:25:08.451834 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2025-07-12 20:25:08.451838 | orchestrator | testbed-node-3 : ok=186  changed=44  unreachable=0 failed=0 skipped=152  rescued=0 ignored=0 2025-07-12 20:25:08.451842 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2025-07-12 20:25:08.451845 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2025-07-12 20:25:08.451849 | orchestrator | 2025-07-12 20:25:08.451853 | orchestrator | 2025-07-12 20:25:08.451857 | orchestrator | 2025-07-12 20:25:08.451860 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:25:08.451864 | orchestrator | Saturday 12 July 2025 20:25:05 +0000 (0:00:00.261) 0:11:48.923 ********* 2025-07-12 20:25:08.451868 | orchestrator | =============================================================================== 2025-07-12 20:25:08.451872 | orchestrator | 
ceph-container-common : Pulling Ceph container image ------------------- 81.42s 2025-07-12 20:25:08.451875 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 43.37s 2025-07-12 20:25:08.451879 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.64s 2025-07-12 20:25:08.451886 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.55s 2025-07-12 20:25:08.451890 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.86s 2025-07-12 20:25:08.451893 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.42s 2025-07-12 20:25:08.451902 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.79s 2025-07-12 20:25:08.451907 | orchestrator | ceph-mon : Fetch ceph initial keys ------------------------------------- 10.72s 2025-07-12 20:25:08.451910 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.32s 2025-07-12 20:25:08.451919 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 7.98s 2025-07-12 20:25:08.451923 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 7.02s 2025-07-12 20:25:08.451926 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.31s 2025-07-12 20:25:08.451930 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.56s 2025-07-12 20:25:08.451934 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.41s 2025-07-12 20:25:08.451938 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.27s 2025-07-12 20:25:08.451941 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.23s 2025-07-12 20:25:08.451945 | orchestrator | ceph-crash : 
Create client.crash keyring -------------------------------- 3.93s 2025-07-12 20:25:08.451949 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.71s 2025-07-12 20:25:08.451952 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.25s 2025-07-12 20:25:08.451956 | orchestrator | ceph-config : Generate Ceph file ---------------------------------------- 3.23s 2025-07-12 20:25:08.451959 | orchestrator | 2025-07-12 20:25:08 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:25:11.483520 | orchestrator | 2025-07-12 20:25:11 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED 2025-07-12 20:25:11.485650 | orchestrator | 2025-07-12 20:25:11 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED 2025-07-12 20:25:11.487098 | orchestrator | 2025-07-12 20:25:11 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED 2025-07-12 20:25:11.487133 | orchestrator | 2025-07-12 20:25:11 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:25:14.533140 | orchestrator | 2025-07-12 20:25:14 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED 2025-07-12 20:25:14.533760 | orchestrator | 2025-07-12 20:25:14 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED 2025-07-12 20:25:14.535035 | orchestrator | 2025-07-12 20:25:14 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED 2025-07-12 20:25:14.535208 | orchestrator | 2025-07-12 20:25:14 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:25:17.586706 | orchestrator | 2025-07-12 20:25:17 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED 2025-07-12 20:25:17.588958 | orchestrator | 2025-07-12 20:25:17 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED 2025-07-12 20:25:17.590760 | orchestrator | 2025-07-12 20:25:17 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED 
2025-07-12 20:25:17 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:25:20 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:25:20 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED
2025-07-12 20:25:20 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED
2025-07-12 20:25:20 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:25:23 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:25:23 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED
2025-07-12 20:25:23 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED
2025-07-12 20:25:23 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:25:26 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:25:26 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED
2025-07-12 20:25:26 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED
2025-07-12 20:25:26 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:25:29 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:25:29 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED
2025-07-12 20:25:29 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED
2025-07-12 20:25:29 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:25:32 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:25:32 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED
2025-07-12 20:25:32 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED
2025-07-12 20:25:32 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:25:35 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:25:35 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED
2025-07-12 20:25:35 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED
2025-07-12 20:25:35 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:25:38 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:25:38 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED
2025-07-12 20:25:38 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED
2025-07-12 20:25:38 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:25:42 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:25:42 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED
2025-07-12 20:25:42 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED
2025-07-12 20:25:42 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:25:45 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:25:45 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED
2025-07-12 20:25:45 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED
2025-07-12 20:25:45 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:25:48 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:25:48 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED
2025-07-12 20:25:48 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED
2025-07-12 20:25:48 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:25:51 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:25:51 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED
2025-07-12 20:25:51 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED
2025-07-12 20:25:51 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:25:54 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:25:54 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED
2025-07-12 20:25:54 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED
2025-07-12 20:25:54 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:25:57 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:25:57 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED
2025-07-12 20:25:57 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED
2025-07-12 20:25:57 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:26:00 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:26:00 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED
2025-07-12 20:26:00 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED
2025-07-12 20:26:00 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:26:03 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:26:03 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED
2025-07-12 20:26:03 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED
2025-07-12 20:26:03 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:26:06 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED 2025-07-12 20:26:06.438541 | orchestrator | 2025-07-12 20:26:06 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state STARTED 2025-07-12 20:26:06.441686 | orchestrator | 2025-07-12 20:26:06 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED 2025-07-12 20:26:06.441763 | orchestrator | 2025-07-12 20:26:06 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:26:09.486853 | orchestrator | 2025-07-12 20:26:09 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED 2025-07-12 20:26:09.487530 | orchestrator | 2025-07-12 20:26:09 | INFO  | Task 90e05732-99ac-428f-b457-4ca1c7a0b1ee is in state SUCCESS 2025-07-12 20:26:09.487573 | orchestrator | 2025-07-12 20:26:09.489676 | orchestrator | 2025-07-12 20:26:09.489720 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:26:09.489730 | orchestrator | 2025-07-12 20:26:09.489737 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 20:26:09.489744 | orchestrator | Saturday 12 July 2025 20:22:56 +0000 (0:00:00.273) 0:00:00.273 ********* 2025-07-12 20:26:09.489750 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:26:09.489776 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:26:09.489783 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:26:09.489789 | orchestrator | 2025-07-12 20:26:09.489864 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 20:26:09.489873 | orchestrator | Saturday 12 July 2025 20:22:56 +0000 (0:00:00.294) 0:00:00.568 ********* 2025-07-12 20:26:09.489881 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2025-07-12 20:26:09.489887 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2025-07-12 20:26:09.489894 | orchestrator | ok: [testbed-node-2] => 
(item=enable_opensearch_True) 2025-07-12 20:26:09.489901 | orchestrator | 2025-07-12 20:26:09.489907 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2025-07-12 20:26:09.489914 | orchestrator | 2025-07-12 20:26:09.489920 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-12 20:26:09.489927 | orchestrator | Saturday 12 July 2025 20:22:57 +0000 (0:00:00.448) 0:00:01.016 ********* 2025-07-12 20:26:09.489934 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:26:09.489940 | orchestrator | 2025-07-12 20:26:09.489946 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2025-07-12 20:26:09.489952 | orchestrator | Saturday 12 July 2025 20:22:57 +0000 (0:00:00.556) 0:00:01.572 ********* 2025-07-12 20:26:09.489959 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-12 20:26:09.489966 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-12 20:26:09.489972 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2025-07-12 20:26:09.489978 | orchestrator | 2025-07-12 20:26:09.489984 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2025-07-12 20:26:09.489991 | orchestrator | Saturday 12 July 2025 20:22:58 +0000 (0:00:00.711) 0:00:02.284 ********* 2025-07-12 20:26:09.490180 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:26:09.490198 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:26:09.490236 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:26:09.490247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:26:09.490261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:26:09.490270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:26:09.490283 | orchestrator | 2025-07-12 20:26:09.490290 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-07-12 20:26:09.490322 | orchestrator | Saturday 12 July 2025 20:23:00 +0000 (0:00:01.982) 0:00:04.266 ********* 2025-07-12 20:26:09.490329 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 
2025-07-12 20:26:09.490336 | orchestrator | 2025-07-12 20:26:09.490343 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-07-12 20:26:09.490350 | orchestrator | Saturday 12 July 2025 20:23:01 +0000 (0:00:00.550) 0:00:04.817 ********* 2025-07-12 20:26:09.490367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:26:09.490375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:26:09.490387 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:26:09.490395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:26:09.490414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:26:09.490423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:26:09.490430 | orchestrator | 2025-07-12 20:26:09.490436 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-07-12 20:26:09.490442 | orchestrator | Saturday 12 July 2025 20:23:04 +0000 (0:00:02.939) 0:00:07.756 ********* 2025-07-12 20:26:09.490452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 20:26:09.490465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 20:26:09.490472 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:26:09.490485 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 20:26:09.490492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 20:26:09.490499 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:26:09.490505 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 20:26:09.490519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 20:26:09.490527 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:26:09.490532 | orchestrator | 2025-07-12 20:26:09.490538 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-07-12 20:26:09.490545 | orchestrator | Saturday 12 July 2025 20:23:05 +0000 (0:00:01.632) 0:00:09.389 ********* 2025-07-12 20:26:09.490557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 20:26:09.490565 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 20:26:09.490572 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:26:09.490578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}})  2025-07-12 20:26:09.490594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 20:26:09.490602 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:26:09.490612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-07-12 20:26:09.490618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-07-12 20:26:09.490624 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:26:09.490630 | orchestrator | 2025-07-12 20:26:09.490636 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-07-12 20:26:09.490641 | orchestrator | Saturday 12 July 2025 20:23:06 +0000 (0:00:01.197) 0:00:10.587 ********* 2025-07-12 20:26:09.490647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:26:09.490660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:26:09.490667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:26:09.490677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:26:09.490685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:26:09.490706 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:26:09.490714 | orchestrator | 2025-07-12 20:26:09.490721 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-07-12 20:26:09.490728 | orchestrator | Saturday 12 July 2025 20:23:09 +0000 (0:00:02.378) 0:00:12.965 ********* 2025-07-12 20:26:09.490735 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:26:09.490742 | orchestrator | changed: [testbed-node-1] 2025-07-12 
20:26:09.490749 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:26:09.490756 | orchestrator | 2025-07-12 20:26:09.490763 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-07-12 20:26:09.490771 | orchestrator | Saturday 12 July 2025 20:23:13 +0000 (0:00:04.055) 0:00:17.021 ********* 2025-07-12 20:26:09.490778 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:26:09.490786 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:26:09.490794 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:26:09.490802 | orchestrator | 2025-07-12 20:26:09.490810 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-07-12 20:26:09.490818 | orchestrator | Saturday 12 July 2025 20:23:15 +0000 (0:00:01.764) 0:00:18.785 ********* 2025-07-12 20:26:09.490832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:26:09.490840 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:26:09.490852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.19.2.20250711', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-07-12 20:26:09.490863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:26:09.490874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-07-12 20:26:09.490881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-07-12 20:26:09.490894 | orchestrator |
2025-07-12 20:26:09.490901 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-07-12 20:26:09.490908 | orchestrator | Saturday 12 July 2025 20:23:17 +0000 (0:00:02.594) 0:00:21.380 *********
2025-07-12 20:26:09.490916 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:26:09.490922 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:26:09.490929 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:26:09.490936 | orchestrator |
2025-07-12 20:26:09.490942 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-07-12 20:26:09.490949 | orchestrator | Saturday 12 July 2025 20:23:17 +0000 (0:00:00.321) 0:00:21.701 *********
2025-07-12 20:26:09.490957 | orchestrator |
2025-07-12 20:26:09.490964 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-07-12 20:26:09.490971 | orchestrator | Saturday 12 July 2025 20:23:18 +0000 (0:00:00.074) 0:00:21.775 *********
2025-07-12 20:26:09.490979 | orchestrator |
2025-07-12 20:26:09.490987 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-07-12 20:26:09.490994 | orchestrator | Saturday 12 July 2025 20:23:18 +0000 (0:00:00.069) 0:00:21.845 *********
2025-07-12 20:26:09.491002 | orchestrator |
2025-07-12 20:26:09.491009 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-07-12 20:26:09.491016 | orchestrator | Saturday 12 July 2025 20:23:18 +0000 (0:00:00.276) 0:00:22.122 *********
2025-07-12 20:26:09.491027 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:26:09.491034 | orchestrator |
2025-07-12 20:26:09.491041 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-07-12 20:26:09.491049 | orchestrator | Saturday 12 July 2025 20:23:18 +0000 (0:00:00.234) 0:00:22.356 *********
2025-07-12 20:26:09.491057 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:26:09.491064 | orchestrator |
2025-07-12 20:26:09.491071 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-07-12 20:26:09.491079 | orchestrator | Saturday 12 July 2025 20:23:18 +0000 (0:00:00.224) 0:00:22.581 *********
2025-07-12 20:26:09.491086 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:26:09.491092 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:26:09.491100 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:26:09.491107 | orchestrator |
2025-07-12 20:26:09.491114 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-07-12 20:26:09.491122 | orchestrator | Saturday 12 July 2025 20:24:34 +0000 (0:01:15.266) 0:01:37.848 *********
2025-07-12 20:26:09.491130 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:26:09.491137 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:26:09.491144 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:26:09.491152 | orchestrator |
2025-07-12 20:26:09.491159 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-07-12 20:26:09.491165 | orchestrator | Saturday 12 July 2025 20:25:57 +0000 (0:01:23.216) 0:03:01.065 *********
2025-07-12 20:26:09.491172 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:26:09.491178 | orchestrator |
2025-07-12 20:26:09.491184 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-07-12 20:26:09.491190 | orchestrator | Saturday 12 July 2025 20:25:58 +0000 (0:00:00.707) 0:03:01.773 *********
2025-07-12 20:26:09.491197 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:26:09.491203 | orchestrator |
2025-07-12 20:26:09.491209 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-07-12 20:26:09.491215 | orchestrator | Saturday 12 July 2025 20:26:00 +0000 (0:00:02.374) 0:03:04.148 *********
2025-07-12 20:26:09.491227 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:26:09.491234 | orchestrator |
2025-07-12 20:26:09.491241 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-07-12 20:26:09.491247 | orchestrator | Saturday 12 July 2025 20:26:02 +0000 (0:00:02.331) 0:03:06.479 *********
2025-07-12 20:26:09.491254 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:26:09.491260 | orchestrator |
2025-07-12 20:26:09.491266 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-07-12 20:26:09.491272 | orchestrator | Saturday 12 July 2025 20:26:05 +0000 (0:00:02.693) 0:03:09.173 *********
2025-07-12 20:26:09.491277 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:26:09.491282 | orchestrator |
2025-07-12 20:26:09.491313 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:26:09.491323 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-12 20:26:09.491330 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-12 20:26:09.491336 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-12 20:26:09.491343 | orchestrator |
2025-07-12 20:26:09.491349 | orchestrator |
2025-07-12 20:26:09.491354 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:26:09.491361 | orchestrator | Saturday 12 July 2025 20:26:07 +0000 (0:00:02.467) 0:03:11.641 *********
2025-07-12 20:26:09.491368 | orchestrator | ===============================================================================
2025-07-12 20:26:09.491375 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 83.22s
2025-07-12 20:26:09.491382 | orchestrator | opensearch : Restart opensearch container ------------------------------ 75.27s
2025-07-12 20:26:09.491389 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 4.06s
2025-07-12 20:26:09.491396 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.94s
2025-07-12 20:26:09.491403 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.69s
2025-07-12 20:26:09.491410 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.59s
2025-07-12 20:26:09.491416 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.47s
2025-07-12 20:26:09.491423 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.38s
2025-07-12 20:26:09.491429 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.37s
2025-07-12 20:26:09.491435 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.33s
2025-07-12 20:26:09.491441 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.98s
2025-07-12 20:26:09.491446 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.76s
2025-07-12 20:26:09.491452 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.63s
2025-07-12 20:26:09.491458 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.20s
2025-07-12 20:26:09.491463 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.71s
2025-07-12 20:26:09.491469 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.71s
2025-07-12 20:26:09.491475 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.56s
2025-07-12 20:26:09.491486 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.55s
2025-07-12 20:26:09.491492 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s
2025-07-12 20:26:09.491498 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.42s
2025-07-12 20:26:09.492448 | orchestrator | 2025-07-12 20:26:09 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED
2025-07-12 20:26:09.492583 | orchestrator | 2025-07-12 20:26:09 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:26:12.536446 | orchestrator | 2025-07-12 20:26:12 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:26:12.538626 | orchestrator | 2025-07-12 20:26:12 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state STARTED
2025-07-12 20:26:12.538640 | orchestrator | 2025-07-12 20:26:12 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:26:15.585405 | orchestrator | 2025-07-12 20:26:15 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED
2025-07-12 20:26:15.586690 | orchestrator | 2025-07-12 20:26:15 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:26:15.587888 | orchestrator | 2025-07-12 20:26:15 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:26:15.590403 | orchestrator | 2025-07-12 20:26:15 | INFO  | Task 3de194d2-ad94-4fc1-95f9-7f84d5629327 is in state SUCCESS
2025-07-12 20:26:15.591946 | orchestrator |
2025-07-12 20:26:15.591967 | orchestrator |
2025-07-12 20:26:15.591973 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2025-07-12 20:26:15.591978 | orchestrator |
2025-07-12 20:26:15.591982 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-07-12 20:26:15.591987 | orchestrator | Saturday 12 July 2025 20:22:56 +0000 (0:00:00.104) 0:00:00.104 *********
2025-07-12 20:26:15.591991 | orchestrator | ok: [localhost] => {
2025-07-12 20:26:15.591997 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2025-07-12 20:26:15.592001 | orchestrator | }
2025-07-12 20:26:15.592005 | orchestrator |
2025-07-12 20:26:15.592009 | orchestrator | TASK [Check MariaDB service] ***************************************************
2025-07-12 20:26:15.592013 | orchestrator | Saturday 12 July 2025 20:22:56 +0000 (0:00:00.056) 0:00:00.161 *********
2025-07-12 20:26:15.592017 | orchestrator | fatal: [localhost]: FAILED!
=> {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-07-12 20:26:15.592022 | orchestrator | ...ignoring 2025-07-12 20:26:15.592026 | orchestrator | 2025-07-12 20:26:15.592030 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-07-12 20:26:15.592034 | orchestrator | Saturday 12 July 2025 20:22:59 +0000 (0:00:02.986) 0:00:03.147 ********* 2025-07-12 20:26:15.592037 | orchestrator | skipping: [localhost] 2025-07-12 20:26:15.592041 | orchestrator | 2025-07-12 20:26:15.592045 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-07-12 20:26:15.592048 | orchestrator | Saturday 12 July 2025 20:22:59 +0000 (0:00:00.066) 0:00:03.213 ********* 2025-07-12 20:26:15.592052 | orchestrator | ok: [localhost] 2025-07-12 20:26:15.592056 | orchestrator | 2025-07-12 20:26:15.592059 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:26:15.592063 | orchestrator | 2025-07-12 20:26:15.592067 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 20:26:15.592071 | orchestrator | Saturday 12 July 2025 20:22:59 +0000 (0:00:00.187) 0:00:03.401 ********* 2025-07-12 20:26:15.592074 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:26:15.592078 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:26:15.592082 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:26:15.592085 | orchestrator | 2025-07-12 20:26:15.592089 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 20:26:15.592093 | orchestrator | Saturday 12 July 2025 20:23:00 +0000 (0:00:00.335) 0:00:03.736 ********* 2025-07-12 20:26:15.592096 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-07-12 20:26:15.592101 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 
2025-07-12 20:26:15.592105 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-07-12 20:26:15.593470 | orchestrator | 2025-07-12 20:26:15.593485 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-07-12 20:26:15.593489 | orchestrator | 2025-07-12 20:26:15.593493 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-07-12 20:26:15.593497 | orchestrator | Saturday 12 July 2025 20:23:00 +0000 (0:00:00.835) 0:00:04.572 ********* 2025-07-12 20:26:15.593501 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-07-12 20:26:15.593505 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-07-12 20:26:15.593509 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-07-12 20:26:15.593512 | orchestrator | 2025-07-12 20:26:15.593516 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-12 20:26:15.593520 | orchestrator | Saturday 12 July 2025 20:23:01 +0000 (0:00:00.509) 0:00:05.081 ********* 2025-07-12 20:26:15.593524 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:26:15.593529 | orchestrator | 2025-07-12 20:26:15.593533 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-07-12 20:26:15.593536 | orchestrator | Saturday 12 July 2025 20:23:02 +0000 (0:00:00.633) 0:00:05.715 ********* 2025-07-12 20:26:15.593586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 20:26:15.593601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 20:26:15.593614 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 20:26:15.593619 | orchestrator | 2025-07-12 20:26:15.594832 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-07-12 20:26:15.594854 | orchestrator | Saturday 12 July 2025 20:23:05 +0000 (0:00:03.415) 0:00:09.131 ********* 2025-07-12 20:26:15.594858 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:26:15.594862 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:26:15.594866 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:26:15.594870 | orchestrator | 2025-07-12 20:26:15.594874 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-07-12 20:26:15.594878 | orchestrator | Saturday 12 July 2025 20:23:06 +0000 (0:00:00.846) 0:00:09.978 ********* 2025-07-12 20:26:15.594882 | orchestrator | 
skipping: [testbed-node-1] 2025-07-12 20:26:15.594886 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:26:15.594890 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:26:15.594893 | orchestrator | 2025-07-12 20:26:15.594897 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-07-12 20:26:15.594901 | orchestrator | Saturday 12 July 2025 20:23:07 +0000 (0:00:01.625) 0:00:11.603 ********* 2025-07-12 20:26:15.594907 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout 
server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 20:26:15.594936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' 
server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 20:26:15.594941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 20:26:15.594949 | orchestrator | 2025-07-12 20:26:15.594953 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-07-12 20:26:15.594957 | orchestrator | Saturday 12 July 2025 20:23:12 +0000 (0:00:04.968) 0:00:16.572 ********* 2025-07-12 20:26:15.594960 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:26:15.594964 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:26:15.594968 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:26:15.594971 | orchestrator | 2025-07-12 20:26:15.594975 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-07-12 20:26:15.594979 | orchestrator | Saturday 12 July 2025 20:23:14 +0000 (0:00:01.190) 0:00:17.763 ********* 2025-07-12 20:26:15.594983 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:26:15.594986 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:26:15.594990 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:26:15.594993 | orchestrator | 2025-07-12 20:26:15.594997 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-12 20:26:15.595001 | orchestrator | Saturday 12 July 2025 20:23:18 +0000 (0:00:04.829) 0:00:22.592 ********* 2025-07-12 20:26:15.595008 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:26:15.595013 | orchestrator | 2025-07-12 20:26:15.595017 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-07-12 20:26:15.595020 | orchestrator | Saturday 12 July 2025 20:23:19 +0000 (0:00:00.939) 0:00:23.531 ********* 2025-07-12 20:26:15.595030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:26:15.595038 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:26:15.595042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:26:15.595046 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:26:15.596301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:26:15.596331 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:26:15.596336 | orchestrator | 2025-07-12 20:26:15.596340 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-07-12 20:26:15.596344 | orchestrator | Saturday 12 July 2025 20:23:22 
+0000 (0:00:02.970) 0:00:26.502 ********* 2025-07-12 20:26:15.596348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:26:15.596352 | orchestrator | skipping: [testbed-node-2] 2025-07-12 
20:26:15.596368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:26:15.596376 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:26:15.596380 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:26:15.596384 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:26:15.596388 | orchestrator | 2025-07-12 20:26:15.596392 | orchestrator | TASK [service-cert-copy : mariadb | 
Copying over backend internal TLS key] ***** 2025-07-12 20:26:15.596395 | orchestrator | Saturday 12 July 2025 20:23:25 +0000 (0:00:02.670) 0:00:29.173 ********* 2025-07-12 20:26:15.596406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 
rise 2 fall 5 backup', '']}}}})  2025-07-12 20:26:15.596426 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:26:15.596431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:26:15.596435 
| orchestrator | skipping: [testbed-node-1] 2025-07-12 20:26:15.596442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-07-12 20:26:15.596449 | orchestrator | skipping: [testbed-node-0] 2025-07-12 
20:26:15.596453 | orchestrator | 2025-07-12 20:26:15.596457 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-07-12 20:26:15.596460 | orchestrator | Saturday 12 July 2025 20:23:28 +0000 (0:00:03.002) 0:00:32.176 ********* 2025-07-12 20:26:15.596469 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 20:26:15.596482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2025-07-12 20:26:15.596501 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-07-12 20:26:15.596506 | orchestrator | 2025-07-12 20:26:15.596510 | orchestrator | TASK [mariadb : Create MariaDB 
volume] ***************************************** 2025-07-12 20:26:15.596514 | orchestrator | Saturday 12 July 2025 20:23:31 +0000 (0:00:03.134) 0:00:35.310 ********* 2025-07-12 20:26:15.596517 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:26:15.596521 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:26:15.596525 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:26:15.596528 | orchestrator | 2025-07-12 20:26:15.596532 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-07-12 20:26:15.596536 | orchestrator | Saturday 12 July 2025 20:23:32 +0000 (0:00:01.222) 0:00:36.532 ********* 2025-07-12 20:26:15.596540 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:26:15.596544 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:26:15.596548 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:26:15.596551 | orchestrator | 2025-07-12 20:26:15.596555 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-07-12 20:26:15.596559 | orchestrator | Saturday 12 July 2025 20:23:33 +0000 (0:00:00.356) 0:00:36.889 ********* 2025-07-12 20:26:15.596563 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:26:15.596566 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:26:15.596570 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:26:15.596574 | orchestrator | 2025-07-12 20:26:15.596577 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-07-12 20:26:15.596581 | orchestrator | Saturday 12 July 2025 20:23:33 +0000 (0:00:00.349) 0:00:37.239 ********* 2025-07-12 20:26:15.596586 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-07-12 20:26:15.596591 | orchestrator | ...ignoring 2025-07-12 20:26:15.596595 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-07-12 20:26:15.596599 | orchestrator | ...ignoring 2025-07-12 20:26:15.596603 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-07-12 20:26:15.596613 | orchestrator | ...ignoring 2025-07-12 20:26:15.596616 | orchestrator | 2025-07-12 20:26:15.596620 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-07-12 20:26:15.596624 | orchestrator | Saturday 12 July 2025 20:23:44 +0000 (0:00:10.881) 0:00:48.120 ********* 2025-07-12 20:26:15.596628 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:26:15.596631 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:26:15.596635 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:26:15.596639 | orchestrator | 2025-07-12 20:26:15.596642 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-07-12 20:26:15.596646 | orchestrator | Saturday 12 July 2025 20:23:45 +0000 (0:00:00.698) 0:00:48.819 ********* 2025-07-12 20:26:15.596650 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:26:15.596653 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:26:15.596657 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:26:15.596661 | orchestrator | 2025-07-12 20:26:15.596665 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-07-12 20:26:15.596668 | orchestrator | Saturday 12 July 2025 20:23:45 +0000 (0:00:00.463) 0:00:49.282 ********* 2025-07-12 20:26:15.596672 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:26:15.596676 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:26:15.596679 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:26:15.596683 | orchestrator | 2025-07-12 20:26:15.596687 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-07-12 20:26:15.596690 | orchestrator | Saturday 12 July 2025 20:23:46 +0000 (0:00:00.463) 0:00:49.745 ********* 2025-07-12 20:26:15.596694 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:26:15.596698 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:26:15.596702 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:26:15.596705 | orchestrator | 2025-07-12 20:26:15.596709 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-07-12 20:26:15.596715 | orchestrator | Saturday 12 July 2025 20:23:46 +0000 (0:00:00.508) 0:00:50.254 ********* 2025-07-12 20:26:15.596719 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:26:15.596723 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:26:15.596727 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:26:15.596730 | orchestrator | 2025-07-12 20:26:15.596734 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-07-12 20:26:15.596738 | orchestrator | Saturday 12 July 2025 20:23:47 +0000 (0:00:00.679) 0:00:50.933 ********* 2025-07-12 20:26:15.596741 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:26:15.596745 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:26:15.596749 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:26:15.596753 | orchestrator | 2025-07-12 20:26:15.596756 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-12 20:26:15.596760 | orchestrator | Saturday 12 July 2025 20:23:47 +0000 (0:00:00.455) 0:00:51.389 ********* 2025-07-12 20:26:15.596764 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:26:15.596767 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:26:15.596771 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-07-12 20:26:15.596775 | orchestrator | 2025-07-12 
20:26:15.596778 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-07-12 20:26:15.596782 | orchestrator | Saturday 12 July 2025 20:23:48 +0000 (0:00:00.393) 0:00:51.783 ********* 2025-07-12 20:26:15.596786 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:26:15.596789 | orchestrator | 2025-07-12 20:26:15.596793 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-07-12 20:26:15.596797 | orchestrator | Saturday 12 July 2025 20:23:58 +0000 (0:00:10.350) 0:01:02.133 ********* 2025-07-12 20:26:15.596801 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:26:15.596804 | orchestrator | 2025-07-12 20:26:15.596808 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-07-12 20:26:15.596815 | orchestrator | Saturday 12 July 2025 20:23:58 +0000 (0:00:00.128) 0:01:02.261 ********* 2025-07-12 20:26:15.596819 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:26:15.596822 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:26:15.596826 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:26:15.596830 | orchestrator | 2025-07-12 20:26:15.596834 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-07-12 20:26:15.596837 | orchestrator | Saturday 12 July 2025 20:23:59 +0000 (0:00:01.111) 0:01:03.373 ********* 2025-07-12 20:26:15.596841 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:26:15.596845 | orchestrator | 2025-07-12 20:26:15.596848 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-07-12 20:26:15.596852 | orchestrator | Saturday 12 July 2025 20:24:07 +0000 (0:00:08.093) 0:01:11.466 ********* 2025-07-12 20:26:15.596856 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:26:15.596859 | orchestrator | 2025-07-12 20:26:15.596863 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2025-07-12 20:26:15.596867 | orchestrator | Saturday 12 July 2025 20:24:09 +0000 (0:00:01.611) 0:01:13.078 ********* 2025-07-12 20:26:15.596870 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:26:15.596874 | orchestrator | 2025-07-12 20:26:15.596878 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-07-12 20:26:15.596882 | orchestrator | Saturday 12 July 2025 20:24:12 +0000 (0:00:02.642) 0:01:15.720 ********* 2025-07-12 20:26:15.596885 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:26:15.596889 | orchestrator | 2025-07-12 20:26:15.596893 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-07-12 20:26:15.596896 | orchestrator | Saturday 12 July 2025 20:24:12 +0000 (0:00:00.116) 0:01:15.836 ********* 2025-07-12 20:26:15.596900 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:26:15.596904 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:26:15.596907 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:26:15.596911 | orchestrator | 2025-07-12 20:26:15.596915 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-07-12 20:26:15.596918 | orchestrator | Saturday 12 July 2025 20:24:12 +0000 (0:00:00.545) 0:01:16.382 ********* 2025-07-12 20:26:15.596922 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:26:15.596926 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-07-12 20:26:15.596929 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:26:15.596933 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:26:15.596937 | orchestrator | 2025-07-12 20:26:15.596943 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-07-12 20:26:15.596947 | orchestrator | skipping: no hosts matched 2025-07-12 20:26:15.596950 | orchestrator | 2025-07-12 20:26:15.596954 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-12 20:26:15.596958 | orchestrator | 2025-07-12 20:26:15.596961 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-12 20:26:15.596965 | orchestrator | Saturday 12 July 2025 20:24:13 +0000 (0:00:00.346) 0:01:16.728 ********* 2025-07-12 20:26:15.596969 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:26:15.596973 | orchestrator | 2025-07-12 20:26:15.596976 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-12 20:26:15.596980 | orchestrator | Saturday 12 July 2025 20:24:32 +0000 (0:00:19.819) 0:01:36.548 ********* 2025-07-12 20:26:15.596984 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:26:15.596987 | orchestrator | 2025-07-12 20:26:15.596991 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-12 20:26:15.596995 | orchestrator | Saturday 12 July 2025 20:24:53 +0000 (0:00:20.596) 0:01:57.145 ********* 2025-07-12 20:26:15.596998 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:26:15.597002 | orchestrator | 2025-07-12 20:26:15.597006 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-12 20:26:15.597012 | orchestrator | 2025-07-12 20:26:15.597016 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-12 20:26:15.597020 | orchestrator | Saturday 12 July 2025 20:24:56 +0000 (0:00:02.747) 0:01:59.892 ********* 2025-07-12 20:26:15.597023 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:26:15.597027 | orchestrator | 2025-07-12 20:26:15.597031 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-12 20:26:15.597037 | orchestrator | Saturday 12 July 2025 20:25:21 +0000 (0:00:25.779) 0:02:25.672 ********* 2025-07-12 20:26:15.597041 | 
orchestrator | ok: [testbed-node-2] 2025-07-12 20:26:15.597045 | orchestrator | 2025-07-12 20:26:15.597049 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-12 20:26:15.597052 | orchestrator | Saturday 12 July 2025 20:25:37 +0000 (0:00:15.582) 0:02:41.255 ********* 2025-07-12 20:26:15.597056 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:26:15.597060 | orchestrator | 2025-07-12 20:26:15.597063 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-07-12 20:26:15.597067 | orchestrator | 2025-07-12 20:26:15.597071 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-07-12 20:26:15.597074 | orchestrator | Saturday 12 July 2025 20:25:40 +0000 (0:00:02.824) 0:02:44.079 ********* 2025-07-12 20:26:15.597078 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:26:15.597082 | orchestrator | 2025-07-12 20:26:15.597085 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-07-12 20:26:15.597089 | orchestrator | Saturday 12 July 2025 20:25:52 +0000 (0:00:12.161) 0:02:56.241 ********* 2025-07-12 20:26:15.597093 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:26:15.597096 | orchestrator | 2025-07-12 20:26:15.597100 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-07-12 20:26:15.597104 | orchestrator | Saturday 12 July 2025 20:25:57 +0000 (0:00:04.636) 0:03:00.877 ********* 2025-07-12 20:26:15.597107 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:26:15.597111 | orchestrator | 2025-07-12 20:26:15.597115 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-07-12 20:26:15.597119 | orchestrator | 2025-07-12 20:26:15.597122 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-07-12 20:26:15.597126 | orchestrator | 
Saturday 12 July 2025 20:25:59 +0000 (0:00:02.501) 0:03:03.379 ********* 2025-07-12 20:26:15.597130 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:26:15.597133 | orchestrator | 2025-07-12 20:26:15.597137 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-07-12 20:26:15.597141 | orchestrator | Saturday 12 July 2025 20:26:00 +0000 (0:00:00.528) 0:03:03.908 ********* 2025-07-12 20:26:15.597144 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:26:15.597148 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:26:15.597152 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:26:15.597156 | orchestrator | 2025-07-12 20:26:15.597159 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-07-12 20:26:15.597163 | orchestrator | Saturday 12 July 2025 20:26:02 +0000 (0:00:02.557) 0:03:06.466 ********* 2025-07-12 20:26:15.597167 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:26:15.597170 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:26:15.597174 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:26:15.597178 | orchestrator | 2025-07-12 20:26:15.597181 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-07-12 20:26:15.597185 | orchestrator | Saturday 12 July 2025 20:26:04 +0000 (0:00:02.138) 0:03:08.604 ********* 2025-07-12 20:26:15.597189 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:26:15.597192 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:26:15.597196 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:26:15.597200 | orchestrator | 2025-07-12 20:26:15.597203 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-07-12 20:26:15.597207 | orchestrator | Saturday 12 July 2025 20:26:07 +0000 (0:00:02.139) 0:03:10.743 ********* 2025-07-12 20:26:15.597214 | 
orchestrator | skipping: [testbed-node-1] 2025-07-12 20:26:15.597218 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:26:15.597221 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:26:15.597225 | orchestrator | 2025-07-12 20:26:15.597229 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-07-12 20:26:15.597233 | orchestrator | Saturday 12 July 2025 20:26:09 +0000 (0:00:02.042) 0:03:12.786 ********* 2025-07-12 20:26:15.597236 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:26:15.597240 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:26:15.597244 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:26:15.597247 | orchestrator | 2025-07-12 20:26:15.597251 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-07-12 20:26:15.597255 | orchestrator | Saturday 12 July 2025 20:26:12 +0000 (0:00:03.174) 0:03:15.960 ********* 2025-07-12 20:26:15.597258 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:26:15.597262 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:26:15.597266 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:26:15.597269 | orchestrator | 2025-07-12 20:26:15.597276 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:26:15.597280 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-07-12 20:26:15.597284 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-07-12 20:26:15.597302 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-07-12 20:26:15.597306 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-07-12 20:26:15.597310 | orchestrator | 2025-07-12 20:26:15.597313 | orchestrator | 2025-07-12 20:26:15.597317 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-07-12 20:26:15.597321 | orchestrator | Saturday 12 July 2025 20:26:12 +0000 (0:00:00.221) 0:03:16.182 ********* 2025-07-12 20:26:15.597325 | orchestrator | =============================================================================== 2025-07-12 20:26:15.597328 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 45.60s 2025-07-12 20:26:15.597332 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.18s 2025-07-12 20:26:15.597338 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.16s 2025-07-12 20:26:15.597342 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.88s 2025-07-12 20:26:15.597346 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.35s 2025-07-12 20:26:15.597350 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.09s 2025-07-12 20:26:15.597353 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.57s 2025-07-12 20:26:15.597357 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.97s 2025-07-12 20:26:15.597361 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.83s 2025-07-12 20:26:15.597364 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.64s 2025-07-12 20:26:15.597368 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.42s 2025-07-12 20:26:15.597372 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.17s 2025-07-12 20:26:15.597375 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.13s 2025-07-12 20:26:15.597379 | orchestrator | service-cert-copy : 
mariadb | Copying over backend internal TLS key ----- 3.00s 2025-07-12 20:26:15.597383 | orchestrator | Check MariaDB service --------------------------------------------------- 2.99s 2025-07-12 20:26:15.597386 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 2.97s 2025-07-12 20:26:15.597393 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.67s 2025-07-12 20:26:15.597397 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.64s 2025-07-12 20:26:15.597400 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.56s 2025-07-12 20:26:15.597404 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.50s 2025-07-12 20:26:15.597408 | orchestrator | 2025-07-12 20:26:15 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:26:18.638505 | orchestrator | 2025-07-12 20:26:18 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED 2025-07-12 20:26:18.639936 | orchestrator | 2025-07-12 20:26:18 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED 2025-07-12 20:26:18.639942 | orchestrator | 2025-07-12 20:26:18 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED 2025-07-12 20:26:18.639947 | orchestrator | 2025-07-12 20:26:18 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:26:21.694863 | orchestrator | 2025-07-12 20:26:21 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED 2025-07-12 20:26:21.698310 | orchestrator | 2025-07-12 20:26:21 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED 2025-07-12 20:26:21.700369 | orchestrator | 2025-07-12 20:26:21 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED 2025-07-12 20:26:21.700377 | orchestrator | 2025-07-12 20:26:21 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:26:24.737929 | 
orchestrator | 2025-07-12 20:26:24 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED
2025-07-12 20:26:24.738842 | orchestrator | 2025-07-12 20:26:24 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:26:24.739990 | orchestrator | 2025-07-12 20:26:24 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:26:24.740193 | orchestrator | 2025-07-12 20:26:24 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:26:27.779032 | orchestrator | 2025-07-12 20:26:27 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED
2025-07-12 20:26:27.781062 | orchestrator | 2025-07-12 20:26:27 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:26:27.783210 | orchestrator | 2025-07-12 20:26:27 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:26:27.783241 | orchestrator | 2025-07-12 20:26:27 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:26:30.826267 | orchestrator | 2025-07-12 20:26:30 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED
2025-07-12 20:26:30.826565 | orchestrator | 2025-07-12 20:26:30 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:26:30.827487 | orchestrator | 2025-07-12 20:26:30 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:26:30.827576 | orchestrator | 2025-07-12 20:26:30 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:26:33.869416 | orchestrator | 2025-07-12 20:26:33 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED
2025-07-12 20:26:33.871604 | orchestrator | 2025-07-12 20:26:33 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:26:33.875938 | orchestrator | 2025-07-12 20:26:33 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:26:33.875983 | orchestrator | 2025-07-12 20:26:33 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:26:36.925520 | orchestrator | 2025-07-12 20:26:36 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED
2025-07-12 20:26:36.926972 | orchestrator | 2025-07-12 20:26:36 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:26:36.928784 | orchestrator | 2025-07-12 20:26:36 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:26:36.928933 | orchestrator | 2025-07-12 20:26:36 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:26:39.963682 | orchestrator | 2025-07-12 20:26:39 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED
2025-07-12 20:26:39.963867 | orchestrator | 2025-07-12 20:26:39 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:26:39.964591 | orchestrator | 2025-07-12 20:26:39 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:26:39.964616 | orchestrator | 2025-07-12 20:26:39 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:26:43.002140 | orchestrator | 2025-07-12 20:26:42 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED
2025-07-12 20:26:43.004560 | orchestrator | 2025-07-12 20:26:43 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:26:43.006140 | orchestrator | 2025-07-12 20:26:43 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:26:43.006732 | orchestrator | 2025-07-12 20:26:43 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:26:46.051202 | orchestrator | 2025-07-12 20:26:46 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED
2025-07-12 20:26:46.053383 | orchestrator | 2025-07-12 20:26:46 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:26:46.054014 | orchestrator | 2025-07-12 20:26:46 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:26:46.054098 | orchestrator | 2025-07-12 20:26:46 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:26:49.096568 | orchestrator | 2025-07-12 20:26:49 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED
2025-07-12 20:26:49.096647 | orchestrator | 2025-07-12 20:26:49 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:26:49.096920 | orchestrator | 2025-07-12 20:26:49 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:26:49.097038 | orchestrator | 2025-07-12 20:26:49 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:26:52.137768 | orchestrator | 2025-07-12 20:26:52 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED
2025-07-12 20:26:52.141484 | orchestrator | 2025-07-12 20:26:52 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:26:52.142170 | orchestrator | 2025-07-12 20:26:52 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:26:52.142197 | orchestrator | 2025-07-12 20:26:52 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:26:55.186008 | orchestrator | 2025-07-12 20:26:55 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED
2025-07-12 20:26:55.187497 | orchestrator | 2025-07-12 20:26:55 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:26:55.191030 | orchestrator | 2025-07-12 20:26:55 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:26:55.191101 | orchestrator | 2025-07-12 20:26:55 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:26:58.236277 | orchestrator | 2025-07-12 20:26:58 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED
2025-07-12 20:26:58.237886 | orchestrator | 2025-07-12 20:26:58 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:26:58.239333 | orchestrator | 2025-07-12 20:26:58 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:26:58.239355 | orchestrator | 2025-07-12 20:26:58 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:27:01.285790 | orchestrator | 2025-07-12 20:27:01 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED
2025-07-12 20:27:01.287169 | orchestrator | 2025-07-12 20:27:01 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:27:01.289187 | orchestrator | 2025-07-12 20:27:01 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:27:01.289213 | orchestrator | 2025-07-12 20:27:01 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:27:04.347682 | orchestrator | 2025-07-12 20:27:04 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED
2025-07-12 20:27:04.350387 | orchestrator | 2025-07-12 20:27:04 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:27:04.352824 | orchestrator | 2025-07-12 20:27:04 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:27:04.354083 | orchestrator | 2025-07-12 20:27:04 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:27:07.404555 | orchestrator | 2025-07-12 20:27:07 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED
2025-07-12 20:27:07.405538 | orchestrator | 2025-07-12 20:27:07 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:27:07.407127 | orchestrator | 2025-07-12 20:27:07 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:27:07.407595 | orchestrator | 2025-07-12 20:27:07 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:27:10.446572 | orchestrator | 2025-07-12 20:27:10 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED
2025-07-12 20:27:10.450412 | orchestrator | 2025-07-12 20:27:10 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:27:10.451998 | orchestrator | 2025-07-12 20:27:10 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:27:10.452253 | orchestrator | 2025-07-12 20:27:10 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:27:13.487930 | orchestrator | 2025-07-12 20:27:13 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED
2025-07-12 20:27:13.490921 | orchestrator | 2025-07-12 20:27:13 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:27:13.492042 | orchestrator | 2025-07-12 20:27:13 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:27:13.492074 | orchestrator | 2025-07-12 20:27:13 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:27:16.531703 | orchestrator | 2025-07-12 20:27:16 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED
2025-07-12 20:27:16.532731 | orchestrator | 2025-07-12 20:27:16 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state STARTED
2025-07-12 20:27:16.534252 | orchestrator | 2025-07-12 20:27:16 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:27:16.534303 | orchestrator | 2025-07-12 20:27:16 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:27:19.581699 | orchestrator | 2025-07-12 20:27:19 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED
2025-07-12 20:27:19.585129 | orchestrator | 2025-07-12 20:27:19 | INFO  | Task 95ab5a1b-733c-4ced-9565-ffe765db0f10 is in state SUCCESS
2025-07-12 20:27:19.586989 | orchestrator |
2025-07-12 20:27:19.587024 | orchestrator |
2025-07-12 20:27:19.587036 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-07-12 20:27:19.587047 | orchestrator |
2025-07-12 20:27:19.587058 | orchestrator | TASK [ceph-facts : Include facts.yml]
******************************************
2025-07-12 20:27:19.587122 | orchestrator | Saturday 12 July 2025 20:25:11 +0000 (0:00:00.661) 0:00:00.661 *********
2025-07-12 20:27:19.587135 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:27:19.587227 | orchestrator |
2025-07-12 20:27:19.587708 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2025-07-12 20:27:19.587730 | orchestrator | Saturday 12 July 2025 20:25:11 +0000 (0:00:00.710) 0:00:01.372 *********
2025-07-12 20:27:19.587742 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:27:19.587753 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:27:19.587944 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:27:19.587957 | orchestrator |
2025-07-12 20:27:19.587969 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2025-07-12 20:27:19.587980 | orchestrator | Saturday 12 July 2025 20:25:12 +0000 (0:00:00.684) 0:00:02.056 *********
2025-07-12 20:27:19.587992 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:27:19.588004 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:27:19.588015 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:27:19.588026 | orchestrator |
2025-07-12 20:27:19.588039 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-07-12 20:27:19.588051 | orchestrator | Saturday 12 July 2025 20:25:12 +0000 (0:00:00.294) 0:00:02.350 *********
2025-07-12 20:27:19.588063 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:27:19.588074 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:27:19.588085 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:27:19.588097 | orchestrator |
2025-07-12 20:27:19.588108 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-07-12 20:27:19.588120 | orchestrator | Saturday 12 July 2025 20:25:13 +0000 (0:00:00.776) 0:00:03.127 *********
2025-07-12 20:27:19.588132 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:27:19.588143 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:27:19.588155 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:27:19.588166 | orchestrator |
2025-07-12 20:27:19.588178 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-07-12 20:27:19.588189 | orchestrator | Saturday 12 July 2025 20:25:13 +0000 (0:00:00.334) 0:00:03.461 *********
2025-07-12 20:27:19.588266 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:27:19.588277 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:27:19.588288 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:27:19.588299 | orchestrator |
2025-07-12 20:27:19.588310 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-07-12 20:27:19.588320 | orchestrator | Saturday 12 July 2025 20:25:14 +0000 (0:00:00.310) 0:00:03.772 *********
2025-07-12 20:27:19.588354 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:27:19.588367 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:27:19.588377 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:27:19.588387 | orchestrator |
2025-07-12 20:27:19.588461 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-07-12 20:27:19.588474 | orchestrator | Saturday 12 July 2025 20:25:14 +0000 (0:00:00.358) 0:00:04.130 *********
2025-07-12 20:27:19.588485 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.588496 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:27:19.588507 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:27:19.588518 | orchestrator |
2025-07-12 20:27:19.588528 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-07-12 20:27:19.588539 | orchestrator | Saturday 12 July 2025 20:25:14 +0000 (0:00:00.508) 0:00:04.639 *********
2025-07-12 20:27:19.588550 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:27:19.588561 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:27:19.588571 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:27:19.588596 | orchestrator |
2025-07-12 20:27:19.588607 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-07-12 20:27:19.588618 | orchestrator | Saturday 12 July 2025 20:25:15 +0000 (0:00:00.310) 0:00:04.949 *********
2025-07-12 20:27:19.588629 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-12 20:27:19.588640 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 20:27:19.588651 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 20:27:19.588662 | orchestrator |
2025-07-12 20:27:19.588672 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-07-12 20:27:19.588683 | orchestrator | Saturday 12 July 2025 20:25:15 +0000 (0:00:00.664) 0:00:05.614 *********
2025-07-12 20:27:19.588694 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:27:19.588705 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:27:19.588715 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:27:19.588726 | orchestrator |
2025-07-12 20:27:19.588737 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-07-12 20:27:19.588747 | orchestrator | Saturday 12 July 2025 20:25:16 +0000 (0:00:00.451) 0:00:06.065 *********
2025-07-12 20:27:19.588758 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-12 20:27:19.588769 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 20:27:19.588780 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 20:27:19.588791 | orchestrator |
2025-07-12 20:27:19.588802 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-07-12 20:27:19.588813 | orchestrator | Saturday 12 July 2025 20:25:18 +0000 (0:00:02.095) 0:00:08.161 *********
2025-07-12 20:27:19.588824 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-07-12 20:27:19.588834 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-07-12 20:27:19.588845 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-07-12 20:27:19.588856 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.588866 | orchestrator |
2025-07-12 20:27:19.588877 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-07-12 20:27:19.588925 | orchestrator | Saturday 12 July 2025 20:25:18 +0000 (0:00:00.442) 0:00:08.603 *********
2025-07-12 20:27:19.588947 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.589988 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.590010 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.590073 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.590085 | orchestrator |
2025-07-12 20:27:19.590097 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2025-07-12 20:27:19.590107 | orchestrator | Saturday 12 July 2025 20:25:19 +0000 (0:00:00.796) 0:00:09.400 *********
2025-07-12 20:27:19.590120 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.590147 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.590159 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.590170 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.590181 | orchestrator |
2025-07-12 20:27:19.590192 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2025-07-12 20:27:19.590202 | orchestrator | Saturday 12 July 2025 20:25:19 +0000 (0:00:00.154) 0:00:09.554 *********
2025-07-12 20:27:19.590215 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '56d7f597ab18', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-07-12 20:25:17.107965', 'end': '2025-07-12 20:25:17.146371', 'delta': '0:00:00.038406', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['56d7f597ab18'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.590230 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'f2ff6bfddcef', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-07-12 20:25:17.821548', 'end': '2025-07-12 20:25:17.872493', 'delta': '0:00:00.050945', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['f2ff6bfddcef'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.590325 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '47937d7fb0e8', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-07-12 20:25:18.356530', 'end': '2025-07-12 20:25:18.398381', 'delta': '0:00:00.041851', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['47937d7fb0e8'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.590421 | orchestrator |
2025-07-12 20:27:19.590442 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-07-12 20:27:19.590459 | orchestrator | Saturday 12 July 2025 20:25:20 +0000 (0:00:00.377) 0:00:09.932 *********
2025-07-12 20:27:19.590478 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:27:19.590496 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:27:19.590513 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:27:19.590530 | orchestrator |
2025-07-12 20:27:19.590545 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-07-12 20:27:19.590597 | orchestrator | Saturday 12 July 2025 20:25:20 +0000 (0:00:00.461) 0:00:10.394 *********
2025-07-12 20:27:19.590616 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)]
2025-07-12 20:27:19.590633 | orchestrator |
2025-07-12 20:27:19.590652 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-07-12 20:27:19.590671 | orchestrator | Saturday 12 July 2025 20:25:22 +0000 (0:00:01.649) 0:00:12.043 *********
2025-07-12 20:27:19.590689 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.590707 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:27:19.590725 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:27:19.590741 | orchestrator |
2025-07-12 20:27:19.590757 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-07-12 20:27:19.590774 | orchestrator | Saturday 12 July 2025 20:25:22 +0000 (0:00:00.300) 0:00:12.344 *********
2025-07-12 20:27:19.590790 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.590807 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:27:19.590824 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:27:19.590842 | orchestrator |
2025-07-12 20:27:19.590860 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-12 20:27:19.590878 | orchestrator | Saturday 12 July 2025 20:25:23 +0000 (0:00:00.427) 0:00:12.771 *********
2025-07-12 20:27:19.590897 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.590914 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:27:19.590932 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:27:19.590950 | orchestrator |
2025-07-12 20:27:19.590969 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-07-12 20:27:19.590988 | orchestrator | Saturday 12 July 2025 20:25:23 +0000 (0:00:00.496) 0:00:13.268 *********
2025-07-12 20:27:19.591007 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:27:19.591026 | orchestrator |
2025-07-12 20:27:19.591044 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-07-12 20:27:19.591063 | orchestrator | Saturday 12 July 2025 20:25:23 +0000 (0:00:00.129) 0:00:13.398 *********
2025-07-12 20:27:19.591082 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.591101 | orchestrator |
2025-07-12 20:27:19.591120 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-07-12 20:27:19.591140 | orchestrator | Saturday 12 July 2025 20:25:24 +0000 (0:00:00.257) 0:00:13.655 *********
2025-07-12 20:27:19.591158 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.591177 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:27:19.591195 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:27:19.591214 | orchestrator |
2025-07-12 20:27:19.591232 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-07-12 20:27:19.591250
| orchestrator | Saturday 12 July 2025 20:25:24 +0000 (0:00:00.321) 0:00:13.976 *********
2025-07-12 20:27:19.591268 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.591286 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:27:19.591304 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:27:19.591322 | orchestrator |
2025-07-12 20:27:19.591361 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-07-12 20:27:19.591379 | orchestrator | Saturday 12 July 2025 20:25:24 +0000 (0:00:00.321) 0:00:14.298 *********
2025-07-12 20:27:19.591396 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.591414 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:27:19.591431 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:27:19.591449 | orchestrator |
2025-07-12 20:27:19.591467 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-07-12 20:27:19.591485 | orchestrator | Saturday 12 July 2025 20:25:25 +0000 (0:00:00.534) 0:00:14.833 *********
2025-07-12 20:27:19.591504 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.591522 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:27:19.591540 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:27:19.591556 | orchestrator |
2025-07-12 20:27:19.591584 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-07-12 20:27:19.591602 | orchestrator | Saturday 12 July 2025 20:25:25 +0000 (0:00:00.341) 0:00:15.174 *********
2025-07-12 20:27:19.591620 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.591639 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:27:19.591657 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:27:19.591675 | orchestrator |
2025-07-12 20:27:19.591693 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-07-12 20:27:19.591711 | orchestrator | Saturday 12 July 2025 20:25:25 +0000 (0:00:00.329) 0:00:15.504 *********
2025-07-12 20:27:19.591728 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.591745 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:27:19.591762 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:27:19.591778 | orchestrator |
2025-07-12 20:27:19.591795 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-07-12 20:27:19.591880 | orchestrator | Saturday 12 July 2025 20:25:26 +0000 (0:00:00.326) 0:00:15.831 *********
2025-07-12 20:27:19.591900 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.591917 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:27:19.591934 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:27:19.591951 | orchestrator |
2025-07-12 20:27:19.591978 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-07-12 20:27:19.591995 | orchestrator | Saturday 12 July 2025 20:25:26 +0000 (0:00:00.525) 0:00:16.356 *********
2025-07-12 20:27:19.592014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a733058e--5b74--5553--b3bf--66d1cbf46d31-osd--block--a733058e--5b74--5553--b3bf--66d1cbf46d31', 'dm-uuid-LVM-LRBCVsAuQ4NYflbyU4pf0eP05SUfKllFaKERMg5N4jfaILvyunRxXIrcd5Q5Pt52'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-07-12 20:27:19.592034 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8d632655--ba67--5245--89a0--0cb971b00289-osd--block--8d632655--ba67--5245--89a0--0cb971b00289', 'dm-uuid-LVM-3UXIhqn3wzLYuFvUWcZP6rcvoyj26863wapNWwVMrWeewxCuHJKeNf5YRrv83XX5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-07-12 20:27:19.592052 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 20:27:19.592071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 20:27:19.592089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 20:27:19.592120 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 20:27:19.592138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 20:27:19.592207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 20:27:19.592236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 20:27:19.592255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-07-12 20:27:19.592275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5410106d-ed3b-4664-9779-6ad1cc9646b0', 'scsi-SQEMU_QEMU_HARDDISK_5410106d-ed3b-4664-9779-6ad1cc9646b0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5410106d-ed3b-4664-9779-6ad1cc9646b0-part1', 'scsi-SQEMU_QEMU_HARDDISK_5410106d-ed3b-4664-9779-6ad1cc9646b0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['eb4c1330-a351-47ae-b39f-6c88d500daef']}, 'sectors': 167770079, 'sectorsize': 512, 'size': '80.00 GB', 'start': '2048', 'uuid': 'eb4c1330-a351-47ae-b39f-6c88d500daef'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-12 20:27:19.592324 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a733058e--5b74--5553--b3bf--66d1cbf46d31-osd--block--a733058e--5b74--5553--b3bf--66d1cbf46d31'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oXYt9l-zYKn-vfrZ-WuMo-ABm2-vvvj-AvKrQB', 'scsi-0QEMU_QEMU_HARDDISK_47b67cf6-6134-4ebc-b4bd-75f5912c51d1', 'scsi-SQEMU_QEMU_HARDDISK_47b67cf6-6134-4ebc-b4bd-75f5912c51d1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-12 20:27:19.592399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--8d632655--ba67--5245--89a0--0cb971b00289-osd--block--8d632655--ba67--5245--89a0--0cb971b00289'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BcF1zn-BnUo-Islx-uRhu-9F36-gT48-C6uyQB', 'scsi-0QEMU_QEMU_HARDDISK_e02eada2-9691-4994-b44c-0b327a73be9a', 'scsi-SQEMU_QEMU_HARDDISK_e02eada2-9691-4994-b44c-0b327a73be9a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-07-12 20:27:19.592420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe3c3c4e-2b96-4bec-8093-d77b3db985a2', 'scsi-SQEMU_QEMU_HARDDISK_fe3c3c4e-2b96-4bec-8093-d77b3db985a2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:27:19.592499 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-42-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:27:19.592513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c2ea885c--c09d--528a--8e30--9d64ecae89b3-osd--block--c2ea885c--c09d--528a--8e30--9d64ecae89b3', 'dm-uuid-LVM-eceZmWe6OR1E2fwKczuSYydrQhn8MZPRD9CNbWWHLbVocXD2HfKLhNJrURaTYB2l'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 20:27:19.592524 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5037a2b3--768c--53ee--9f72--df4915d4fb6f-osd--block--5037a2b3--768c--53ee--9f72--df4915d4fb6f', 
'dm-uuid-LVM-dTgv11CN0erm79ZzAiH5PP2f99pdpaj35eJpv4pXG1yMce0lvQ11QBsEbBMmsfDu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 20:27:19.592534 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:27:19.592544 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:27:19.592562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:27:19.592572 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:27:19.592582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:27:19.592592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:27:19.592618 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:27:19.592629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:27:19.592638 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:27:19.592649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_956b92a8-e2a8-4c28-b21e-590538c1fc3c', 'scsi-SQEMU_QEMU_HARDDISK_956b92a8-e2a8-4c28-b21e-590538c1fc3c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_956b92a8-e2a8-4c28-b21e-590538c1fc3c-part1', 'scsi-SQEMU_QEMU_HARDDISK_956b92a8-e2a8-4c28-b21e-590538c1fc3c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['eb4c1330-a351-47ae-b39f-6c88d500daef']}, 'sectors': 167770079, 'sectorsize': 512, 'size': '80.00 GB', 'start': '2048', 'uuid': 'eb4c1330-a351-47ae-b39f-6c88d500daef'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:27:19.592666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c2ea885c--c09d--528a--8e30--9d64ecae89b3-osd--block--c2ea885c--c09d--528a--8e30--9d64ecae89b3'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gPOHnc-FYBr-qF2E-WZSS-16C9-ZJrz-QQ3ji2', 'scsi-0QEMU_QEMU_HARDDISK_cbc49688-9ad7-4fd0-a52c-a19b0583b25c', 'scsi-SQEMU_QEMU_HARDDISK_cbc49688-9ad7-4fd0-a52c-a19b0583b25c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:27:19.592677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5037a2b3--768c--53ee--9f72--df4915d4fb6f-osd--block--5037a2b3--768c--53ee--9f72--df4915d4fb6f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cK655S-Nnlc-nB0c-l8ma-s1bR-vBMZ-tbphde', 'scsi-0QEMU_QEMU_HARDDISK_1d5b9d5f-7727-4753-bdb1-c3a309291ad5', 'scsi-SQEMU_QEMU_HARDDISK_1d5b9d5f-7727-4753-bdb1-c3a309291ad5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:27:19.592709 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3d06229f--4e10--52c4--b396--8cb508609dff-osd--block--3d06229f--4e10--52c4--b396--8cb508609dff', 'dm-uuid-LVM-SEvrrBUsOXsdHRPsgBOMCYdYCHW0QRZTuSP5eWfnVSAnOZj74dOgbCxEA9w4bH0E'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 
'virtual': 1}})  2025-07-12 20:27:19.592738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_736d04ae-95cc-4835-aff1-6fbe44d77808', 'scsi-SQEMU_QEMU_HARDDISK_736d04ae-95cc-4835-aff1-6fbe44d77808'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:27:19.592749 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--81820e8a--af8a--5909--b466--981a4bed2414-osd--block--81820e8a--af8a--5909--b466--981a4bed2414', 'dm-uuid-LVM-Vxecnppb2BKZw0ce7eQ0jWxT7TNCX9gURk3jwgF0EUQNBGKug81YnrkDpxAK1m14'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-07-12 20:27:19.592759 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-42-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:27:19.592776 | orchestrator | skipping: [testbed-node-4] 
2025-07-12 20:27:19.592786 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:27:19.592796 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:27:19.592807 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:27:19.592817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:27:19.592832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:27:19.592847 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:27:19.592857 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:27:19.592867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-07-12 20:27:19.592878 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9eb58a9-7a8d-4884-8549-7422e45233bf', 'scsi-SQEMU_QEMU_HARDDISK_a9eb58a9-7a8d-4884-8549-7422e45233bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9eb58a9-7a8d-4884-8549-7422e45233bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_a9eb58a9-7a8d-4884-8549-7422e45233bf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['eb4c1330-a351-47ae-b39f-6c88d500daef']}, 'sectors': 167770079, 'sectorsize': 512, 'size': '80.00 GB', 'start': '2048', 'uuid': 'eb4c1330-a351-47ae-b39f-6c88d500daef'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:27:19.592895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--3d06229f--4e10--52c4--b396--8cb508609dff-osd--block--3d06229f--4e10--52c4--b396--8cb508609dff'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-u9ZO1C-cVZT-KDbP-pCfQ-8lwy-mE9f-DG554V', 'scsi-0QEMU_QEMU_HARDDISK_9f08906f-6338-431f-a878-f727643915a4', 'scsi-SQEMU_QEMU_HARDDISK_9f08906f-6338-431f-a878-f727643915a4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:27:19.592905 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--81820e8a--af8a--5909--b466--981a4bed2414-osd--block--81820e8a--af8a--5909--b466--981a4bed2414'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DJ9JLp-MG6W-EyaC-SY2P-58v9-0aUr-JN9DFN', 'scsi-0QEMU_QEMU_HARDDISK_1628f950-5804-44ef-9d42-f709daecc346', 'scsi-SQEMU_QEMU_HARDDISK_1628f950-5804-44ef-9d42-f709daecc346'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:27:19.592925 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5652225-c6ef-49dc-a608-4c92c2a71dd6', 'scsi-SQEMU_QEMU_HARDDISK_d5652225-c6ef-49dc-a608-4c92c2a71dd6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:27:19.592936 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-42-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-07-12 20:27:19.592946 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:27:19.592956 | orchestrator | 2025-07-12 20:27:19.592966 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-07-12 20:27:19.592976 | orchestrator | Saturday 12 July 2025 20:25:27 +0000 (0:00:00.620) 0:00:16.977 ********* 2025-07-12 20:27:19.592987 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a733058e--5b74--5553--b3bf--66d1cbf46d31-osd--block--a733058e--5b74--5553--b3bf--66d1cbf46d31', 'dm-uuid-LVM-LRBCVsAuQ4NYflbyU4pf0eP05SUfKllFaKERMg5N4jfaILvyunRxXIrcd5Q5Pt52'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:27:19.593003 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--8d632655--ba67--5245--89a0--0cb971b00289-osd--block--8d632655--ba67--5245--89a0--0cb971b00289', 'dm-uuid-LVM-3UXIhqn3wzLYuFvUWcZP6rcvoyj26863wapNWwVMrWeewxCuHJKeNf5YRrv83XX5'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:27:19.593013 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:27:19.593023 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:27:19.593043 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:27:19.593054 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:27:19.593065 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:27:19.593080 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-07-12 20:27:19.593091 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593102 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593122 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c2ea885c--c09d--528a--8e30--9d64ecae89b3-osd--block--c2ea885c--c09d--528a--8e30--9d64ecae89b3', 'dm-uuid-LVM-eceZmWe6OR1E2fwKczuSYydrQhn8MZPRD9CNbWWHLbVocXD2HfKLhNJrURaTYB2l'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593133 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5410106d-ed3b-4664-9779-6ad1cc9646b0', 'scsi-SQEMU_QEMU_HARDDISK_5410106d-ed3b-4664-9779-6ad1cc9646b0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5410106d-ed3b-4664-9779-6ad1cc9646b0-part1', 'scsi-SQEMU_QEMU_HARDDISK_5410106d-ed3b-4664-9779-6ad1cc9646b0-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['eb4c1330-a351-47ae-b39f-6c88d500daef']}, 'sectors': 167770079, 'sectorsize': 512, 'size': '80.00 GB', 'start': '2048', 'uuid': 'eb4c1330-a351-47ae-b39f-6c88d500daef'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593150 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5037a2b3--768c--53ee--9f72--df4915d4fb6f-osd--block--5037a2b3--768c--53ee--9f72--df4915d4fb6f', 'dm-uuid-LVM-dTgv11CN0erm79ZzAiH5PP2f99pdpaj35eJpv4pXG1yMce0lvQ11QBsEbBMmsfDu'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593171 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--a733058e--5b74--5553--b3bf--66d1cbf46d31-osd--block--a733058e--5b74--5553--b3bf--66d1cbf46d31'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-oXYt9l-zYKn-vfrZ-WuMo-ABm2-vvvj-AvKrQB', 'scsi-0QEMU_QEMU_HARDDISK_47b67cf6-6134-4ebc-b4bd-75f5912c51d1', 'scsi-SQEMU_QEMU_HARDDISK_47b67cf6-6134-4ebc-b4bd-75f5912c51d1'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593183 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593203 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--8d632655--ba67--5245--89a0--0cb971b00289-osd--block--8d632655--ba67--5245--89a0--0cb971b00289'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-BcF1zn-BnUo-Islx-uRhu-9F36-gT48-C6uyQB', 'scsi-0QEMU_QEMU_HARDDISK_e02eada2-9691-4994-b44c-0b327a73be9a', 'scsi-SQEMU_QEMU_HARDDISK_e02eada2-9691-4994-b44c-0b327a73be9a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593214 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593230 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fe3c3c4e-2b96-4bec-8093-d77b3db985a2', 'scsi-SQEMU_QEMU_HARDDISK_fe3c3c4e-2b96-4bec-8093-d77b3db985a2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593240 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593250 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-42-44-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593266 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593280 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593291 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593309 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593319 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593350 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_956b92a8-e2a8-4c28-b21e-590538c1fc3c', 'scsi-SQEMU_QEMU_HARDDISK_956b92a8-e2a8-4c28-b21e-590538c1fc3c'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_956b92a8-e2a8-4c28-b21e-590538c1fc3c-part1', 'scsi-SQEMU_QEMU_HARDDISK_956b92a8-e2a8-4c28-b21e-590538c1fc3c-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['eb4c1330-a351-47ae-b39f-6c88d500daef']}, 'sectors': 167770079, 'sectorsize': 512, 'size': '80.00 GB', 'start': '2048', 'uuid': 'eb4c1330-a351-47ae-b39f-6c88d500daef'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593385 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c2ea885c--c09d--528a--8e30--9d64ecae89b3-osd--block--c2ea885c--c09d--528a--8e30--9d64ecae89b3'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-gPOHnc-FYBr-qF2E-WZSS-16C9-ZJrz-QQ3ji2', 'scsi-0QEMU_QEMU_HARDDISK_cbc49688-9ad7-4fd0-a52c-a19b0583b25c', 'scsi-SQEMU_QEMU_HARDDISK_cbc49688-9ad7-4fd0-a52c-a19b0583b25c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593397 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5037a2b3--768c--53ee--9f72--df4915d4fb6f-osd--block--5037a2b3--768c--53ee--9f72--df4915d4fb6f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-cK655S-Nnlc-nB0c-l8ma-s1bR-vBMZ-tbphde', 'scsi-0QEMU_QEMU_HARDDISK_1d5b9d5f-7727-4753-bdb1-c3a309291ad5', 'scsi-SQEMU_QEMU_HARDDISK_1d5b9d5f-7727-4753-bdb1-c3a309291ad5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593413 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.593423 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_736d04ae-95cc-4835-aff1-6fbe44d77808', 'scsi-SQEMU_QEMU_HARDDISK_736d04ae-95cc-4835-aff1-6fbe44d77808'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593433 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-42-41-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593443 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:27:19.593453 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--3d06229f--4e10--52c4--b396--8cb508609dff-osd--block--3d06229f--4e10--52c4--b396--8cb508609dff', 'dm-uuid-LVM-SEvrrBUsOXsdHRPsgBOMCYdYCHW0QRZTuSP5eWfnVSAnOZj74dOgbCxEA9w4bH0E'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593486 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--81820e8a--af8a--5909--b466--981a4bed2414-osd--block--81820e8a--af8a--5909--b466--981a4bed2414', 'dm-uuid-LVM-Vxecnppb2BKZw0ce7eQ0jWxT7TNCX9gURk3jwgF0EUQNBGKug81YnrkDpxAK1m14'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593503 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593513 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593523 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593533 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593543 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593560 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593604 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593615 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593626 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9eb58a9-7a8d-4884-8549-7422e45233bf', 'scsi-SQEMU_QEMU_HARDDISK_a9eb58a9-7a8d-4884-8549-7422e45233bf'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9eb58a9-7a8d-4884-8549-7422e45233bf-part1', 'scsi-SQEMU_QEMU_HARDDISK_a9eb58a9-7a8d-4884-8549-7422e45233bf-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['eb4c1330-a351-47ae-b39f-6c88d500daef']}, 'sectors': 167770079, 'sectorsize': 512, 'size': '80.00 GB', 'start': '2048', 'uuid': 'eb4c1330-a351-47ae-b39f-6c88d500daef'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593637 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--3d06229f--4e10--52c4--b396--8cb508609dff-osd--block--3d06229f--4e10--52c4--b396--8cb508609dff'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-u9ZO1C-cVZT-KDbP-pCfQ-8lwy-mE9f-DG554V', 'scsi-0QEMU_QEMU_HARDDISK_9f08906f-6338-431f-a878-f727643915a4', 'scsi-SQEMU_QEMU_HARDDISK_9f08906f-6338-431f-a878-f727643915a4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593657 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--81820e8a--af8a--5909--b466--981a4bed2414-osd--block--81820e8a--af8a--5909--b466--981a4bed2414'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-DJ9JLp-MG6W-EyaC-SY2P-58v9-0aUr-JN9DFN', 'scsi-0QEMU_QEMU_HARDDISK_1628f950-5804-44ef-9d42-f709daecc346', 'scsi-SQEMU_QEMU_HARDDISK_1628f950-5804-44ef-9d42-f709daecc346'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593674 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d5652225-c6ef-49dc-a608-4c92c2a71dd6', 'scsi-SQEMU_QEMU_HARDDISK_d5652225-c6ef-49dc-a608-4c92c2a71dd6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593684 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-07-12-19-42-38-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-07-12 20:27:19.593694 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:27:19.593704 | orchestrator |
2025-07-12 20:27:19.593714 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2025-07-12 20:27:19.593724 | orchestrator | Saturday 12 July 2025 20:25:28 +0000 (0:00:00.700) 0:00:17.677 *********
2025-07-12 20:27:19.593733 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:27:19.593743 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:27:19.593753 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:27:19.593762 | orchestrator |
2025-07-12 20:27:19.593772 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2025-07-12 20:27:19.593781 | orchestrator | Saturday 12 July 2025 20:25:28 +0000 (0:00:00.668) 0:00:18.345 *********
2025-07-12 20:27:19.593791 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:27:19.593803 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:27:19.593819 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:27:19.593835 | orchestrator |
2025-07-12 20:27:19.593850 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-07-12 20:27:19.593866 | orchestrator | Saturday 12 July 2025 20:25:29 +0000 (0:00:00.502) 0:00:18.847 *********
2025-07-12 20:27:19.593881 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:27:19.593897 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:27:19.593914 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:27:19.593931 | orchestrator |
2025-07-12 20:27:19.593947 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-07-12 20:27:19.593964 | orchestrator | Saturday 12 July 2025 20:25:29 +0000 (0:00:00.678) 0:00:19.526 *********
2025-07-12 20:27:19.593974 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.593984 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:27:19.593993 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:27:19.594003 | orchestrator |
2025-07-12 20:27:19.594012 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2025-07-12 20:27:19.594077 | orchestrator | Saturday 12 July 2025 20:25:30 +0000 (0:00:00.304) 0:00:19.830 *********
2025-07-12 20:27:19.594095 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.594105 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:27:19.594114 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:27:19.594124 | orchestrator |
2025-07-12 20:27:19.594133 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2025-07-12 20:27:19.594143 | orchestrator | Saturday 12 July 2025 20:25:30 +0000 (0:00:00.437) 0:00:20.268 *********
2025-07-12 20:27:19.594152 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.594161 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:27:19.594171 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:27:19.594180 | orchestrator |
2025-07-12 20:27:19.594190 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2025-07-12 20:27:19.594199 | orchestrator | Saturday 12 July 2025 20:25:31 +0000 (0:00:00.502) 0:00:20.770 *********
2025-07-12 20:27:19.594209 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-07-12 20:27:19.594218 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-07-12 20:27:19.594228 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-07-12 20:27:19.594238 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-07-12 20:27:19.594255 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-07-12 20:27:19.594265 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-07-12 20:27:19.594274 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-07-12 20:27:19.594284 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-07-12 20:27:19.594298 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-07-12 20:27:19.594308 | orchestrator |
2025-07-12 20:27:19.594317 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2025-07-12 20:27:19.594327 | orchestrator | Saturday 12 July 2025 20:25:32 +0000 (0:00:00.872) 0:00:21.642 *********
2025-07-12 20:27:19.594363 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-07-12 20:27:19.594373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-07-12 20:27:19.594383 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-07-12 20:27:19.594392 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.594402 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-07-12 20:27:19.594411 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-07-12 20:27:19.594421 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-07-12 20:27:19.594430 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:27:19.594440 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-07-12 20:27:19.594449 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-07-12 20:27:19.594459 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-07-12 20:27:19.594468 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:27:19.594478 | orchestrator |
2025-07-12 20:27:19.594487 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2025-07-12 20:27:19.594497 | orchestrator | Saturday 12 July 2025 20:25:32 +0000 (0:00:00.344) 0:00:21.987 *********
2025-07-12 20:27:19.594507 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:27:19.594517 | orchestrator |
2025-07-12 20:27:19.594527 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-07-12 20:27:19.594546 | orchestrator | Saturday 12 July 2025 20:25:33 +0000 (0:00:00.711) 0:00:22.698 *********
2025-07-12 20:27:19.594562 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.594578 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:27:19.594593 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:27:19.594609 | orchestrator |
2025-07-12 20:27:19.594627 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-07-12 20:27:19.594646 | orchestrator | Saturday 12 July 2025 20:25:33 +0000 (0:00:00.328) 0:00:23.027 *********
2025-07-12 20:27:19.594695 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.594707 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:27:19.594716 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:27:19.594726 | orchestrator |
2025-07-12 20:27:19.594735 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-07-12 20:27:19.594745 | orchestrator | Saturday 12 July 2025 20:25:33 +0000 (0:00:00.341) 0:00:23.369 *********
2025-07-12 20:27:19.594754 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.594763 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:27:19.594773 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:27:19.594782 | orchestrator |
2025-07-12 20:27:19.594791 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-07-12 20:27:19.594801 | orchestrator | Saturday 12 July 2025 20:25:34 +0000 (0:00:00.335) 0:00:23.704 *********
2025-07-12 20:27:19.594810 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:27:19.594820 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:27:19.594829 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:27:19.594838 | orchestrator |
2025-07-12 20:27:19.594848 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-07-12 20:27:19.594857 | orchestrator | Saturday 12 July 2025 20:25:34 +0000 (0:00:00.652) 0:00:24.357 *********
2025-07-12 20:27:19.594867 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 20:27:19.594876 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 20:27:19.594885 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 20:27:19.594895 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.594904 | orchestrator |
2025-07-12 20:27:19.594913 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-07-12 20:27:19.594923 | orchestrator | Saturday 12 July 2025 20:25:35 +0000 (0:00:00.375) 0:00:24.732 *********
2025-07-12 20:27:19.594932 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 20:27:19.594941 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 20:27:19.594951 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 20:27:19.594960 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.594970 | orchestrator |
2025-07-12 20:27:19.594979 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-07-12 20:27:19.594989 | orchestrator | Saturday 12 July 2025 20:25:35 +0000 (0:00:00.367) 0:00:25.100 *********
2025-07-12 20:27:19.594998 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 20:27:19.595007 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-07-12 20:27:19.595016 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-07-12 20:27:19.595026 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:27:19.595035 | orchestrator |
2025-07-12 20:27:19.595044 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-07-12 20:27:19.595054 | orchestrator | Saturday 12 July 2025 20:25:35 +0000 (0:00:00.367) 0:00:25.467 *********
2025-07-12 20:27:19.595063 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:27:19.595073 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:27:19.595082 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:27:19.595091 | orchestrator |
2025-07-12 20:27:19.595101 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-07-12 20:27:19.595117 | orchestrator | Saturday 12 July 2025 20:25:36 +0000 (0:00:00.332) 0:00:25.799 *********
2025-07-12 20:27:19.595128 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-07-12 20:27:19.595137 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-07-12 20:27:19.595146 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-07-12 20:27:19.595156 | orchestrator |
2025-07-12 20:27:19.595170 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-07-12 20:27:19.595180 | orchestrator | Saturday 12 July 2025 20:25:36 +0000 (0:00:00.547) 0:00:26.347 *********
2025-07-12 20:27:19.595189 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-12 20:27:19.595204 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 20:27:19.595214 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-07-12 20:27:19.595224 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-07-12 20:27:19.595233 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-07-12 20:27:19.595243 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-07-12 20:27:19.595252 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-07-12 20:27:19.595261 | orchestrator |
2025-07-12 20:27:19.595271 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-07-12 20:27:19.595280 | orchestrator | Saturday 12 July 2025 20:25:37 +0000 (0:00:01.016) 0:00:27.364 *********
2025-07-12 20:27:19.595290 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-07-12 20:27:19.595299 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-07-12 20:27:19.595308 | orchestrator | ok: [testbed-node-3 ->
testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-07-12 20:27:19.595318 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-07-12 20:27:19.595327 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-07-12 20:27:19.595358 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-07-12 20:27:19.595368 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-07-12 20:27:19.595378 | orchestrator | 2025-07-12 20:27:19.595387 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-07-12 20:27:19.595397 | orchestrator | Saturday 12 July 2025 20:25:39 +0000 (0:00:02.060) 0:00:29.425 ********* 2025-07-12 20:27:19.595406 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:27:19.595416 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:27:19.595425 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-07-12 20:27:19.595435 | orchestrator | 2025-07-12 20:27:19.595444 | orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-07-12 20:27:19.595454 | orchestrator | Saturday 12 July 2025 20:25:40 +0000 (0:00:00.393) 0:00:29.819 ********* 2025-07-12 20:27:19.595464 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-12 20:27:19.595475 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 
1}) 2025-07-12 20:27:19.595485 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-12 20:27:19.595495 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-12 20:27:19.595505 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-07-12 20:27:19.595523 | orchestrator | 2025-07-12 20:27:19.595532 | orchestrator | TASK [generate keys] *********************************************************** 2025-07-12 20:27:19.595542 | orchestrator | Saturday 12 July 2025 20:26:25 +0000 (0:00:45.787) 0:01:15.607 ********* 2025-07-12 20:27:19.595552 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:27:19.595561 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:27:19.595576 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:27:19.595586 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:27:19.595596 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:27:19.595618 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 
20:27:19.595636 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-07-12 20:27:19.595654 | orchestrator | 2025-07-12 20:27:19.595672 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-07-12 20:27:19.595689 | orchestrator | Saturday 12 July 2025 20:26:49 +0000 (0:00:23.505) 0:01:39.112 ********* 2025-07-12 20:27:19.595706 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:27:19.595716 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:27:19.595725 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:27:19.595734 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:27:19.595744 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:27:19.595753 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:27:19.595762 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-07-12 20:27:19.595772 | orchestrator | 2025-07-12 20:27:19.595781 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-07-12 20:27:19.595791 | orchestrator | Saturday 12 July 2025 20:27:01 +0000 (0:00:12.005) 0:01:51.118 ********* 2025-07-12 20:27:19.595800 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:27:19.595809 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-12 20:27:19.595819 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-12 20:27:19.595828 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:27:19.595838 | orchestrator | changed: [testbed-node-5 -> 
testbed-node-1(192.168.16.11)] => (item=None) 2025-07-12 20:27:19.595847 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-12 20:27:19.595856 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:27:19.595866 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-12 20:27:19.595875 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-12 20:27:19.595884 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:27:19.595894 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-12 20:27:19.595903 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-12 20:27:19.595912 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:27:19.595922 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-12 20:27:19.595931 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-12 20:27:19.595946 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-07-12 20:27:19.595956 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-07-12 20:27:19.595965 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-07-12 20:27:19.595974 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2025-07-12 20:27:19.595984 | orchestrator | 2025-07-12 20:27:19.595993 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:27:19.596003 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2025-07-12 20:27:19.596013 | 
orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-07-12 20:27:19.596023 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-07-12 20:27:19.596032 | orchestrator | 2025-07-12 20:27:19.596042 | orchestrator | 2025-07-12 20:27:19.596051 | orchestrator | 2025-07-12 20:27:19.596060 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:27:19.596070 | orchestrator | Saturday 12 July 2025 20:27:17 +0000 (0:00:16.337) 0:02:07.455 ********* 2025-07-12 20:27:19.596079 | orchestrator | =============================================================================== 2025-07-12 20:27:19.596089 | orchestrator | create openstack pool(s) ----------------------------------------------- 45.79s 2025-07-12 20:27:19.596098 | orchestrator | generate keys ---------------------------------------------------------- 23.51s 2025-07-12 20:27:19.596107 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.34s 2025-07-12 20:27:19.596117 | orchestrator | get keys from monitors ------------------------------------------------- 12.01s 2025-07-12 20:27:19.596126 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.10s 2025-07-12 20:27:19.596135 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.06s 2025-07-12 20:27:19.596150 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.65s 2025-07-12 20:27:19.596160 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.02s 2025-07-12 20:27:19.596170 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.87s 2025-07-12 20:27:19.596184 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.80s 2025-07-12 
20:27:19.596193 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.78s 2025-07-12 20:27:19.596203 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.71s 2025-07-12 20:27:19.596212 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.71s 2025-07-12 20:27:19.596221 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.70s 2025-07-12 20:27:19.596230 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.68s 2025-07-12 20:27:19.596240 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.68s 2025-07-12 20:27:19.596249 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.67s 2025-07-12 20:27:19.596258 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.66s 2025-07-12 20:27:19.596267 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.65s 2025-07-12 20:27:19.596277 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.62s 2025-07-12 20:27:19.596286 | orchestrator | 2025-07-12 20:27:19 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED 2025-07-12 20:27:19.596296 | orchestrator | 2025-07-12 20:27:19 | INFO  | Task 152381e5-8f27-4596-ba94-6bd90283fe48 is in state STARTED 2025-07-12 20:27:19.596306 | orchestrator | 2025-07-12 20:27:19 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:27:22.640331 | orchestrator | 2025-07-12 20:27:22 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED 2025-07-12 20:27:22.642147 | orchestrator | 2025-07-12 20:27:22 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED 2025-07-12 20:27:22.643885 | orchestrator | 2025-07-12 20:27:22 | INFO  | Task 
152381e5-8f27-4596-ba94-6bd90283fe48 is in state STARTED 2025-07-12 20:27:22.643949 | orchestrator | 2025-07-12 20:27:22 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:27:25.694700 | orchestrator | 2025-07-12 20:27:25 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED 2025-07-12 20:27:25.695677 | orchestrator | 2025-07-12 20:27:25 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED 2025-07-12 20:27:25.697513 | orchestrator | 2025-07-12 20:27:25 | INFO  | Task 152381e5-8f27-4596-ba94-6bd90283fe48 is in state STARTED 2025-07-12 20:27:25.697598 | orchestrator | 2025-07-12 20:27:25 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:27:28.750119 | orchestrator | 2025-07-12 20:27:28 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED 2025-07-12 20:27:28.750875 | orchestrator | 2025-07-12 20:27:28 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED 2025-07-12 20:27:28.751979 | orchestrator | 2025-07-12 20:27:28 | INFO  | Task 152381e5-8f27-4596-ba94-6bd90283fe48 is in state STARTED 2025-07-12 20:27:28.754719 | orchestrator | 2025-07-12 20:27:28 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:27:31.811920 | orchestrator | 2025-07-12 20:27:31 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED 2025-07-12 20:27:31.814419 | orchestrator | 2025-07-12 20:27:31 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED 2025-07-12 20:27:31.816535 | orchestrator | 2025-07-12 20:27:31 | INFO  | Task 152381e5-8f27-4596-ba94-6bd90283fe48 is in state STARTED 2025-07-12 20:27:31.816702 | orchestrator | 2025-07-12 20:27:31 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:27:34.870836 | orchestrator | 2025-07-12 20:27:34 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED 2025-07-12 20:27:34.874537 | orchestrator | 2025-07-12 20:27:34 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state 
STARTED 2025-07-12 20:27:34.875881 | orchestrator | 2025-07-12 20:27:34 | INFO  | Task 152381e5-8f27-4596-ba94-6bd90283fe48 is in state STARTED 2025-07-12 20:27:34.876704 | orchestrator | 2025-07-12 20:27:34 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:27:37.927074 | orchestrator | 2025-07-12 20:27:37 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED 2025-07-12 20:27:37.929066 | orchestrator | 2025-07-12 20:27:37 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED 2025-07-12 20:27:37.932214 | orchestrator | 2025-07-12 20:27:37 | INFO  | Task 152381e5-8f27-4596-ba94-6bd90283fe48 is in state STARTED 2025-07-12 20:27:37.932292 | orchestrator | 2025-07-12 20:27:37 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:27:40.988825 | orchestrator | 2025-07-12 20:27:40 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED 2025-07-12 20:27:40.991230 | orchestrator | 2025-07-12 20:27:40 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED 2025-07-12 20:27:40.992922 | orchestrator | 2025-07-12 20:27:40 | INFO  | Task 152381e5-8f27-4596-ba94-6bd90283fe48 is in state STARTED 2025-07-12 20:27:40.992951 | orchestrator | 2025-07-12 20:27:40 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:27:44.044897 | orchestrator | 2025-07-12 20:27:44 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED 2025-07-12 20:27:44.046217 | orchestrator | 2025-07-12 20:27:44 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED 2025-07-12 20:27:44.047927 | orchestrator | 2025-07-12 20:27:44 | INFO  | Task 152381e5-8f27-4596-ba94-6bd90283fe48 is in state STARTED 2025-07-12 20:27:44.047972 | orchestrator | 2025-07-12 20:27:44 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:27:47.103941 | orchestrator | 2025-07-12 20:27:47 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED 2025-07-12 20:27:47.106008 | orchestrator | 
2025-07-12 20:27:47 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED 2025-07-12 20:27:47.107660 | orchestrator | 2025-07-12 20:27:47 | INFO  | Task 152381e5-8f27-4596-ba94-6bd90283fe48 is in state STARTED 2025-07-12 20:27:47.107904 | orchestrator | 2025-07-12 20:27:47 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:27:50.159691 | orchestrator | 2025-07-12 20:27:50 | INFO  | Task e2c10b42-c629-40ce-95cb-7fd881d2ba6d is in state STARTED 2025-07-12 20:27:50.161272 | orchestrator | 2025-07-12 20:27:50 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED 2025-07-12 20:27:50.163922 | orchestrator | 2025-07-12 20:27:50 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED 2025-07-12 20:27:50.165478 | orchestrator | 2025-07-12 20:27:50 | INFO  | Task 152381e5-8f27-4596-ba94-6bd90283fe48 is in state SUCCESS 2025-07-12 20:27:50.165651 | orchestrator | 2025-07-12 20:27:50 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:27:53.221382 | orchestrator | 2025-07-12 20:27:53 | INFO  | Task e2c10b42-c629-40ce-95cb-7fd881d2ba6d is in state STARTED 2025-07-12 20:27:53.225180 | orchestrator | 2025-07-12 20:27:53 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED 2025-07-12 20:27:53.229230 | orchestrator | 2025-07-12 20:27:53 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED 2025-07-12 20:27:53.229281 | orchestrator | 2025-07-12 20:27:53 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:27:56.283551 | orchestrator | 2025-07-12 20:27:56 | INFO  | Task e2c10b42-c629-40ce-95cb-7fd881d2ba6d is in state STARTED 2025-07-12 20:27:56.285571 | orchestrator | 2025-07-12 20:27:56 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED 2025-07-12 20:27:56.288302 | orchestrator | 2025-07-12 20:27:56 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED 2025-07-12 20:27:56.288921 | orchestrator | 2025-07-12 20:27:56 | INFO  | 
Wait 1 second(s) until the next check 2025-07-12 20:27:59.334139 | orchestrator | 2025-07-12 20:27:59 | INFO  | Task e2c10b42-c629-40ce-95cb-7fd881d2ba6d is in state STARTED 2025-07-12 20:27:59.335135 | orchestrator | 2025-07-12 20:27:59 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state STARTED 2025-07-12 20:27:59.336924 | orchestrator | 2025-07-12 20:27:59 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED 2025-07-12 20:27:59.336964 | orchestrator | 2025-07-12 20:27:59 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:28:02.391789 | orchestrator | 2025-07-12 20:28:02 | INFO  | Task e2c10b42-c629-40ce-95cb-7fd881d2ba6d is in state STARTED 2025-07-12 20:28:02.394337 | orchestrator | 2025-07-12 20:28:02 | INFO  | Task a6809f7c-0652-4994-9725-8359b7202a44 is in state SUCCESS 2025-07-12 20:28:02.395271 | orchestrator | 2025-07-12 20:28:02.395306 | orchestrator | 2025-07-12 20:28:02.395319 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2025-07-12 20:28:02.395331 | orchestrator | 2025-07-12 20:28:02.395342 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2025-07-12 20:28:02.395411 | orchestrator | Saturday 12 July 2025 20:27:21 +0000 (0:00:00.152) 0:00:00.152 ********* 2025-07-12 20:28:02.395423 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2025-07-12 20:28:02.395436 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-07-12 20:28:02.395448 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2025-07-12 20:28:02.395458 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2025-07-12 20:28:02.395619 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.cinder.keyring) 2025-07-12 20:28:02.395636 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2025-07-12 20:28:02.395647 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2025-07-12 20:28:02.395658 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2025-07-12 20:28:02.395668 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2025-07-12 20:28:02.395679 | orchestrator | 2025-07-12 20:28:02.395690 | orchestrator | TASK [Create share directory] ************************************************** 2025-07-12 20:28:02.395701 | orchestrator | Saturday 12 July 2025 20:27:25 +0000 (0:00:04.020) 0:00:04.172 ********* 2025-07-12 20:28:02.395896 | orchestrator | changed: [testbed-manager -> localhost] 2025-07-12 20:28:02.395909 | orchestrator | 2025-07-12 20:28:02.395921 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2025-07-12 20:28:02.395932 | orchestrator | Saturday 12 July 2025 20:27:26 +0000 (0:00:01.054) 0:00:05.227 ********* 2025-07-12 20:28:02.395943 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2025-07-12 20:28:02.395954 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-07-12 20:28:02.395965 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-07-12 20:28:02.395975 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2025-07-12 20:28:02.395986 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2025-07-12 20:28:02.395997 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2025-07-12 20:28:02.396008 | orchestrator | 
changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring) 2025-07-12 20:28:02.396018 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2025-07-12 20:28:02.396029 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2025-07-12 20:28:02.396040 | orchestrator | 2025-07-12 20:28:02.396050 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 2025-07-12 20:28:02.396061 | orchestrator | Saturday 12 July 2025 20:27:40 +0000 (0:00:13.593) 0:00:18.820 ********* 2025-07-12 20:28:02.396073 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2025-07-12 20:28:02.396084 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-07-12 20:28:02.396094 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-07-12 20:28:02.396105 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2025-07-12 20:28:02.396116 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2025-07-12 20:28:02.396128 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2025-07-12 20:28:02.396139 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2025-07-12 20:28:02.396149 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2025-07-12 20:28:02.396160 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2025-07-12 20:28:02.396184 | orchestrator | 2025-07-12 20:28:02.396196 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:28:02.396207 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:28:02.396218 | orchestrator | 2025-07-12 20:28:02.396229 | orchestrator | 2025-07-12 20:28:02.396240 
| orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:28:02.396251 | orchestrator | Saturday 12 July 2025 20:27:47 +0000 (0:00:06.933) 0:00:25.753 ********* 2025-07-12 20:28:02.396262 | orchestrator | =============================================================================== 2025-07-12 20:28:02.396273 | orchestrator | Write ceph keys to the share directory --------------------------------- 13.59s 2025-07-12 20:28:02.396284 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.93s 2025-07-12 20:28:02.396294 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.02s 2025-07-12 20:28:02.396305 | orchestrator | Create share directory -------------------------------------------------- 1.05s 2025-07-12 20:28:02.396315 | orchestrator | 2025-07-12 20:28:02.396326 | orchestrator | 2025-07-12 20:28:02.396337 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:28:02.396382 | orchestrator | 2025-07-12 20:28:02.396406 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 20:28:02.396417 | orchestrator | Saturday 12 July 2025 20:26:16 +0000 (0:00:00.353) 0:00:00.353 ********* 2025-07-12 20:28:02.396428 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:02.396439 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:28:02.396450 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:28:02.396461 | orchestrator | 2025-07-12 20:28:02.396472 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 20:28:02.396482 | orchestrator | Saturday 12 July 2025 20:26:17 +0000 (0:00:00.326) 0:00:00.680 ********* 2025-07-12 20:28:02.396493 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-07-12 20:28:02.396505 | orchestrator | ok: [testbed-node-1] => 
(item=enable_horizon_True) 2025-07-12 20:28:02.396518 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-07-12 20:28:02.396531 | orchestrator | 2025-07-12 20:28:02.396543 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-07-12 20:28:02.396555 | orchestrator | 2025-07-12 20:28:02.396575 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-12 20:28:02.396588 | orchestrator | Saturday 12 July 2025 20:26:17 +0000 (0:00:00.447) 0:00:01.127 ********* 2025-07-12 20:28:02.396600 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:28:02.396612 | orchestrator | 2025-07-12 20:28:02.396625 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-07-12 20:28:02.396637 | orchestrator | Saturday 12 July 2025 20:26:18 +0000 (0:00:00.555) 0:00:01.683 ********* 2025-07-12 20:28:02.396657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:28:02.396705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:28:02.396723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:28:02.396744 | orchestrator | 2025-07-12 20:28:02.396757 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-07-12 20:28:02.396771 | orchestrator | Saturday 12 July 2025 20:26:19 +0000 (0:00:01.283) 0:00:02.966 ********* 2025-07-12 20:28:02.396783 | 
orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:02.396796 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:28:02.396808 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:28:02.396820 | orchestrator | 2025-07-12 20:28:02.396831 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-12 20:28:02.396842 | orchestrator | Saturday 12 July 2025 20:26:20 +0000 (0:00:00.463) 0:00:03.429 ********* 2025-07-12 20:28:02.396954 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-07-12 20:28:02.396977 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-07-12 20:28:02.396989 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-07-12 20:28:02.396999 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-07-12 20:28:02.397010 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-07-12 20:28:02.397021 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-07-12 20:28:02.397031 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-07-12 20:28:02.397042 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-07-12 20:28:02.397053 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-07-12 20:28:02.397070 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-07-12 20:28:02.397081 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-07-12 20:28:02.397091 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-07-12 20:28:02.397102 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': 
False})  2025-07-12 20:28:02.397112 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-07-12 20:28:02.397123 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-07-12 20:28:02.397133 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-07-12 20:28:02.397144 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-07-12 20:28:02.397162 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-07-12 20:28:02.397173 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-07-12 20:28:02.397184 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-07-12 20:28:02.397195 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-07-12 20:28:02.397205 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-07-12 20:28:02.397216 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-07-12 20:28:02.397226 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-07-12 20:28:02.397238 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-07-12 20:28:02.397251 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-07-12 20:28:02.397262 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-07-12 20:28:02.397273 | orchestrator | included: 
/ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-07-12 20:28:02.397283 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-07-12 20:28:02.397294 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-07-12 20:28:02.397305 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-07-12 20:28:02.397315 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-07-12 20:28:02.397326 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-07-12 20:28:02.397503 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-07-12 20:28:02.397524 | orchestrator | 2025-07-12 20:28:02.397535 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 20:28:02.397546 | orchestrator | Saturday 12 July 2025 20:26:20 +0000 (0:00:00.744) 0:00:04.174 ********* 2025-07-12 20:28:02.397556 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:02.397567 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:28:02.397578 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:28:02.397588 | orchestrator | 2025-07-12 20:28:02.397599 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 20:28:02.397608 | 
orchestrator | Saturday 12 July 2025 20:26:21 +0000 (0:00:00.280) 0:00:04.454 ********* 2025-07-12 20:28:02.397618 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.397627 | orchestrator | 2025-07-12 20:28:02.397644 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 20:28:02.397655 | orchestrator | Saturday 12 July 2025 20:26:21 +0000 (0:00:00.120) 0:00:04.575 ********* 2025-07-12 20:28:02.397664 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.397674 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:02.397683 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:02.397693 | orchestrator | 2025-07-12 20:28:02.397702 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 20:28:02.397720 | orchestrator | Saturday 12 July 2025 20:26:21 +0000 (0:00:00.405) 0:00:04.980 ********* 2025-07-12 20:28:02.397730 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:02.397739 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:28:02.397749 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:28:02.397758 | orchestrator | 2025-07-12 20:28:02.397767 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 20:28:02.397777 | orchestrator | Saturday 12 July 2025 20:26:21 +0000 (0:00:00.268) 0:00:05.249 ********* 2025-07-12 20:28:02.397786 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.397796 | orchestrator | 2025-07-12 20:28:02.397811 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 20:28:02.397821 | orchestrator | Saturday 12 July 2025 20:26:21 +0000 (0:00:00.118) 0:00:05.368 ********* 2025-07-12 20:28:02.397830 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.397839 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:02.397849 | orchestrator | skipping: [testbed-node-2] 
2025-07-12 20:28:02.397858 | orchestrator | 2025-07-12 20:28:02.397867 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 20:28:02.397877 | orchestrator | Saturday 12 July 2025 20:26:22 +0000 (0:00:00.269) 0:00:05.638 ********* 2025-07-12 20:28:02.397886 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:02.397896 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:28:02.397905 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:28:02.397914 | orchestrator | 2025-07-12 20:28:02.397924 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 20:28:02.397933 | orchestrator | Saturday 12 July 2025 20:26:22 +0000 (0:00:00.271) 0:00:05.910 ********* 2025-07-12 20:28:02.397943 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.397952 | orchestrator | 2025-07-12 20:28:02.397962 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 20:28:02.397971 | orchestrator | Saturday 12 July 2025 20:26:22 +0000 (0:00:00.263) 0:00:06.173 ********* 2025-07-12 20:28:02.397980 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.397989 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:02.397999 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:02.398008 | orchestrator | 2025-07-12 20:28:02.398062 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 20:28:02.398073 | orchestrator | Saturday 12 July 2025 20:26:23 +0000 (0:00:00.282) 0:00:06.455 ********* 2025-07-12 20:28:02.398083 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:02.398092 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:28:02.398102 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:28:02.398111 | orchestrator | 2025-07-12 20:28:02.398121 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 
20:28:02.398131 | orchestrator | Saturday 12 July 2025 20:26:23 +0000 (0:00:00.294) 0:00:06.750 ********* 2025-07-12 20:28:02.398142 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.398153 | orchestrator | 2025-07-12 20:28:02.398165 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 20:28:02.398176 | orchestrator | Saturday 12 July 2025 20:26:23 +0000 (0:00:00.119) 0:00:06.870 ********* 2025-07-12 20:28:02.398187 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.398198 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:02.398209 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:02.398220 | orchestrator | 2025-07-12 20:28:02.398231 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 20:28:02.398242 | orchestrator | Saturday 12 July 2025 20:26:23 +0000 (0:00:00.244) 0:00:07.114 ********* 2025-07-12 20:28:02.398253 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:02.398264 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:28:02.398275 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:28:02.398286 | orchestrator | 2025-07-12 20:28:02.398297 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 20:28:02.398377 | orchestrator | Saturday 12 July 2025 20:26:24 +0000 (0:00:00.447) 0:00:07.561 ********* 2025-07-12 20:28:02.398391 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.398402 | orchestrator | 2025-07-12 20:28:02.398413 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 20:28:02.398424 | orchestrator | Saturday 12 July 2025 20:26:24 +0000 (0:00:00.149) 0:00:07.711 ********* 2025-07-12 20:28:02.398436 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.398447 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:02.398458 | orchestrator | skipping: 
[testbed-node-2] 2025-07-12 20:28:02.398469 | orchestrator | 2025-07-12 20:28:02.398480 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 20:28:02.398492 | orchestrator | Saturday 12 July 2025 20:26:24 +0000 (0:00:00.265) 0:00:07.977 ********* 2025-07-12 20:28:02.398502 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:02.398511 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:28:02.398521 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:28:02.398530 | orchestrator | 2025-07-12 20:28:02.398540 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 20:28:02.398549 | orchestrator | Saturday 12 July 2025 20:26:24 +0000 (0:00:00.291) 0:00:08.268 ********* 2025-07-12 20:28:02.398559 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.398568 | orchestrator | 2025-07-12 20:28:02.398578 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 20:28:02.398587 | orchestrator | Saturday 12 July 2025 20:26:24 +0000 (0:00:00.111) 0:00:08.380 ********* 2025-07-12 20:28:02.398596 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.398606 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:02.398615 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:02.398625 | orchestrator | 2025-07-12 20:28:02.398634 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 20:28:02.398644 | orchestrator | Saturday 12 July 2025 20:26:25 +0000 (0:00:00.419) 0:00:08.799 ********* 2025-07-12 20:28:02.398653 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:02.398670 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:28:02.398680 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:28:02.398690 | orchestrator | 2025-07-12 20:28:02.398700 | orchestrator | TASK [horizon : Check if policies shall be overwritten] 
************************ 2025-07-12 20:28:02.398709 | orchestrator | Saturday 12 July 2025 20:26:25 +0000 (0:00:00.327) 0:00:09.126 ********* 2025-07-12 20:28:02.398718 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.398728 | orchestrator | 2025-07-12 20:28:02.398737 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 20:28:02.398747 | orchestrator | Saturday 12 July 2025 20:26:25 +0000 (0:00:00.107) 0:00:09.234 ********* 2025-07-12 20:28:02.398757 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.398766 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:02.398776 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:02.398785 | orchestrator | 2025-07-12 20:28:02.398794 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 20:28:02.398810 | orchestrator | Saturday 12 July 2025 20:26:26 +0000 (0:00:00.286) 0:00:09.521 ********* 2025-07-12 20:28:02.398819 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:02.398829 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:28:02.398838 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:28:02.398848 | orchestrator | 2025-07-12 20:28:02.398857 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 20:28:02.398867 | orchestrator | Saturday 12 July 2025 20:26:26 +0000 (0:00:00.299) 0:00:09.820 ********* 2025-07-12 20:28:02.398876 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.398886 | orchestrator | 2025-07-12 20:28:02.398896 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 20:28:02.398905 | orchestrator | Saturday 12 July 2025 20:26:26 +0000 (0:00:00.100) 0:00:09.920 ********* 2025-07-12 20:28:02.398914 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.398937 | orchestrator | skipping: [testbed-node-1] 2025-07-12 
20:28:02.398946 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:02.398956 | orchestrator | 2025-07-12 20:28:02.398965 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 20:28:02.398975 | orchestrator | Saturday 12 July 2025 20:26:26 +0000 (0:00:00.401) 0:00:10.321 ********* 2025-07-12 20:28:02.398984 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:02.398994 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:28:02.399003 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:28:02.399013 | orchestrator | 2025-07-12 20:28:02.399022 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-07-12 20:28:02.399032 | orchestrator | Saturday 12 July 2025 20:26:27 +0000 (0:00:00.280) 0:00:10.602 ********* 2025-07-12 20:28:02.399041 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.399051 | orchestrator | 2025-07-12 20:28:02.399060 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 20:28:02.399070 | orchestrator | Saturday 12 July 2025 20:26:27 +0000 (0:00:00.106) 0:00:10.708 ********* 2025-07-12 20:28:02.399079 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.399089 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:02.399099 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:02.399108 | orchestrator | 2025-07-12 20:28:02.399118 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-07-12 20:28:02.399127 | orchestrator | Saturday 12 July 2025 20:26:27 +0000 (0:00:00.277) 0:00:10.986 ********* 2025-07-12 20:28:02.399137 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:28:02.399146 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:28:02.399156 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:28:02.399165 | orchestrator | 2025-07-12 20:28:02.399175 | orchestrator | TASK [horizon : Check if policies 
shall be overwritten] ************************ 2025-07-12 20:28:02.399184 | orchestrator | Saturday 12 July 2025 20:26:27 +0000 (0:00:00.405) 0:00:11.392 ********* 2025-07-12 20:28:02.399194 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.399203 | orchestrator | 2025-07-12 20:28:02.399213 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-07-12 20:28:02.399222 | orchestrator | Saturday 12 July 2025 20:26:28 +0000 (0:00:00.119) 0:00:11.511 ********* 2025-07-12 20:28:02.399232 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.399241 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:02.399251 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:02.399260 | orchestrator | 2025-07-12 20:28:02.399270 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-07-12 20:28:02.399279 | orchestrator | Saturday 12 July 2025 20:26:28 +0000 (0:00:00.284) 0:00:11.796 ********* 2025-07-12 20:28:02.399289 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:28:02.399298 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:28:02.399308 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:28:02.399317 | orchestrator | 2025-07-12 20:28:02.399327 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-07-12 20:28:02.399337 | orchestrator | Saturday 12 July 2025 20:26:29 +0000 (0:00:01.580) 0:00:13.376 ********* 2025-07-12 20:28:02.399368 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-12 20:28:02.399379 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-12 20:28:02.399388 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-07-12 20:28:02.399397 | orchestrator | 2025-07-12 20:28:02.399407 | orchestrator | TASK 
[horizon : Copying over kolla-settings.py] ******************************** 2025-07-12 20:28:02.399416 | orchestrator | Saturday 12 July 2025 20:26:31 +0000 (0:00:01.997) 0:00:15.374 ********* 2025-07-12 20:28:02.399426 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-12 20:28:02.399436 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-12 20:28:02.399470 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-07-12 20:28:02.399480 | orchestrator | 2025-07-12 20:28:02.399490 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-07-12 20:28:02.399505 | orchestrator | Saturday 12 July 2025 20:26:34 +0000 (0:00:02.034) 0:00:17.408 ********* 2025-07-12 20:28:02.399515 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-12 20:28:02.399525 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-12 20:28:02.399535 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-07-12 20:28:02.399544 | orchestrator | 2025-07-12 20:28:02.399554 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-07-12 20:28:02.399564 | orchestrator | Saturday 12 July 2025 20:26:35 +0000 (0:00:01.511) 0:00:18.919 ********* 2025-07-12 20:28:02.399573 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.399583 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:02.399592 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:02.399602 | orchestrator | 2025-07-12 20:28:02.399616 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 
2025-07-12 20:28:02.399626 | orchestrator | Saturday 12 July 2025 20:26:35 +0000 (0:00:00.312) 0:00:19.232 ********* 2025-07-12 20:28:02.399636 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.399645 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:02.399655 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:02.399665 | orchestrator | 2025-07-12 20:28:02.399674 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-12 20:28:02.399684 | orchestrator | Saturday 12 July 2025 20:26:36 +0000 (0:00:00.334) 0:00:19.567 ********* 2025-07-12 20:28:02.399693 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:28:02.399703 | orchestrator | 2025-07-12 20:28:02.399713 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-07-12 20:28:02.399723 | orchestrator | Saturday 12 July 2025 20:26:37 +0000 (0:00:00.862) 0:00:20.429 ********* 2025-07-12 20:28:02.399736 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:28:02.399770 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:28:02.399782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:28:02.399799 | orchestrator | 2025-07-12 20:28:02.399808 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-07-12 20:28:02.399818 | orchestrator | Saturday 12 July 2025 20:26:38 +0000 (0:00:01.285) 
0:00:21.714 ********* 2025-07-12 20:28:02.399853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 20:28:02.399894 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.399922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 20:28:02.399950 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:02.399974 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 20:28:02.399986 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:02.399995 | orchestrator | 2025-07-12 20:28:02.400005 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-07-12 20:28:02.400014 | orchestrator | Saturday 12 July 2025 20:26:38 +0000 (0:00:00.614) 0:00:22.328 ********* 2025-07-12 20:28:02.400033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 20:28:02.400050 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.400066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 
'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 20:28:02.400082 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:02.400105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 
'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-07-12 20:28:02.400116 | 
orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:02.400125 | orchestrator | 2025-07-12 20:28:02.400135 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-07-12 20:28:02.400145 | orchestrator | Saturday 12 July 2025 20:26:39 +0000 (0:00:00.921) 0:00:23.250 ********* 2025-07-12 20:28:02.400155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': 
'80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:28:02.400184 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:28:02.400196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:25.1.1.20250711', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-07-12 20:28:02.400220 | orchestrator | 2025-07-12 20:28:02.400230 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-12 20:28:02.400240 | orchestrator | Saturday 12 July 2025 20:26:41 +0000 (0:00:01.201) 0:00:24.451 ********* 2025-07-12 20:28:02.400249 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:02.400259 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:02.400268 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:02.400277 | orchestrator | 2025-07-12 20:28:02.400287 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-07-12 20:28:02.400297 | orchestrator | Saturday 12 July 2025 20:26:41 +0000 (0:00:00.296) 0:00:24.748 ********* 2025-07-12 20:28:02.400311 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:28:02.400322 | orchestrator | 2025-07-12 20:28:02.400331 | orchestrator | TASK 
[horizon : Creating Horizon database] ************************************* 2025-07-12 20:28:02.400340 | orchestrator | Saturday 12 July 2025 20:26:41 +0000 (0:00:00.625) 0:00:25.373 ********* 2025-07-12 20:28:02.400381 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:28:02.400391 | orchestrator | 2025-07-12 20:28:02.400401 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2025-07-12 20:28:02.400410 | orchestrator | Saturday 12 July 2025 20:26:43 +0000 (0:00:01.789) 0:00:27.163 ********* 2025-07-12 20:28:02.400420 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:28:02.400429 | orchestrator | 2025-07-12 20:28:02.400439 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2025-07-12 20:28:02.400449 | orchestrator | Saturday 12 July 2025 20:26:45 +0000 (0:00:01.955) 0:00:29.119 ********* 2025-07-12 20:28:02.400458 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:28:02.400468 | orchestrator | 2025-07-12 20:28:02.400482 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-07-12 20:28:02.400492 | orchestrator | Saturday 12 July 2025 20:27:01 +0000 (0:00:15.792) 0:00:44.912 ********* 2025-07-12 20:28:02.400502 | orchestrator | 2025-07-12 20:28:02.400511 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-07-12 20:28:02.400520 | orchestrator | Saturday 12 July 2025 20:27:01 +0000 (0:00:00.066) 0:00:44.978 ********* 2025-07-12 20:28:02.400530 | orchestrator | 2025-07-12 20:28:02.400539 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2025-07-12 20:28:02.400549 | orchestrator | Saturday 12 July 2025 20:27:01 +0000 (0:00:00.066) 0:00:45.045 ********* 2025-07-12 20:28:02.400558 | orchestrator | 2025-07-12 20:28:02.400568 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] 
************************** 2025-07-12 20:28:02.400577 | orchestrator | Saturday 12 July 2025 20:27:01 +0000 (0:00:00.068) 0:00:45.114 ********* 2025-07-12 20:28:02.400594 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:28:02.400603 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:28:02.400613 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:28:02.400622 | orchestrator | 2025-07-12 20:28:02.400636 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:28:02.400652 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2025-07-12 20:28:02.400666 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-07-12 20:28:02.400681 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2025-07-12 20:28:02.400697 | orchestrator | 2025-07-12 20:28:02.400712 | orchestrator | 2025-07-12 20:28:02.400727 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:28:02.400743 | orchestrator | Saturday 12 July 2025 20:28:00 +0000 (0:00:58.424) 0:01:43.538 ********* 2025-07-12 20:28:02.400758 | orchestrator | =============================================================================== 2025-07-12 20:28:02.400773 | orchestrator | horizon : Restart horizon container ------------------------------------ 58.42s 2025-07-12 20:28:02.400790 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.79s 2025-07-12 20:28:02.400806 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.03s 2025-07-12 20:28:02.400822 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.00s 2025-07-12 20:28:02.400838 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 1.96s 
2025-07-12 20:28:02.400853 | orchestrator | horizon : Creating Horizon database ------------------------------------- 1.79s 2025-07-12 20:28:02.400869 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.58s 2025-07-12 20:28:02.400886 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 1.51s 2025-07-12 20:28:02.400899 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.29s 2025-07-12 20:28:02.400909 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.28s 2025-07-12 20:28:02.400918 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.20s 2025-07-12 20:28:02.400928 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.92s 2025-07-12 20:28:02.400942 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.86s 2025-07-12 20:28:02.400958 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.74s 2025-07-12 20:28:02.400973 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.63s 2025-07-12 20:28:02.400987 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.61s 2025-07-12 20:28:02.401003 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.56s 2025-07-12 20:28:02.401017 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.46s 2025-07-12 20:28:02.401033 | orchestrator | horizon : Update policy file name --------------------------------------- 0.45s 2025-07-12 20:28:02.401048 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.45s 2025-07-12 20:28:02.401064 | orchestrator | 2025-07-12 20:28:02 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state 
STARTED 2025-07-12 20:28:02.401079 | orchestrator | 2025-07-12 20:28:02 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:28:05.444414 | orchestrator | 2025-07-12 20:28:05 | INFO  | Task e2c10b42-c629-40ce-95cb-7fd881d2ba6d is in state STARTED 2025-07-12 20:28:05.446125 | orchestrator | 2025-07-12 20:28:05 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED 2025-07-12 20:28:05.446172 | orchestrator | 2025-07-12 20:28:05 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:28:08.489899 | orchestrator | 2025-07-12 20:28:08 | INFO  | Task e2c10b42-c629-40ce-95cb-7fd881d2ba6d is in state STARTED 2025-07-12 20:28:08.491483 | orchestrator | 2025-07-12 20:28:08 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED 2025-07-12 20:28:08.491551 | orchestrator | 2025-07-12 20:28:08 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:28:11.531982 | orchestrator | 2025-07-12 20:28:11 | INFO  | Task e2c10b42-c629-40ce-95cb-7fd881d2ba6d is in state STARTED 2025-07-12 20:28:11.532616 | orchestrator | 2025-07-12 20:28:11 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED 2025-07-12 20:28:11.534322 | orchestrator | 2025-07-12 20:28:11 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:28:14.576188 | orchestrator | 2025-07-12 20:28:14 | INFO  | Task e2c10b42-c629-40ce-95cb-7fd881d2ba6d is in state STARTED 2025-07-12 20:28:14.577238 | orchestrator | 2025-07-12 20:28:14 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED 2025-07-12 20:28:14.577611 | orchestrator | 2025-07-12 20:28:14 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:28:17.622692 | orchestrator | 2025-07-12 20:28:17 | INFO  | Task e2c10b42-c629-40ce-95cb-7fd881d2ba6d is in state STARTED 2025-07-12 20:28:17.623612 | orchestrator | 2025-07-12 20:28:17 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED 2025-07-12 20:28:17.623787 | orchestrator | 2025-07-12 20:28:17 | INFO  
| Wait 1 second(s) until the next check
2025-07-12 20:28:20.666954 | orchestrator | 2025-07-12 20:28:20 | INFO  | Task e2c10b42-c629-40ce-95cb-7fd881d2ba6d is in state STARTED
2025-07-12 20:28:20.669531 | orchestrator | 2025-07-12 20:28:20 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:28:20.669946 | orchestrator | 2025-07-12 20:28:20 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:28:23.715958 | orchestrator | 2025-07-12 20:28:23 | INFO  | Task e2c10b42-c629-40ce-95cb-7fd881d2ba6d is in state STARTED
2025-07-12 20:28:23.718675 | orchestrator | 2025-07-12 20:28:23 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:28:23.718731 | orchestrator | 2025-07-12 20:28:23 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:28:26.765549 | orchestrator | 2025-07-12 20:28:26 | INFO  | Task e2c10b42-c629-40ce-95cb-7fd881d2ba6d is in state STARTED
2025-07-12 20:28:26.767224 | orchestrator | 2025-07-12 20:28:26 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:28:26.767257 | orchestrator | 2025-07-12 20:28:26 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:28:29.818300 | orchestrator | 2025-07-12 20:28:29 | INFO  | Task e2c10b42-c629-40ce-95cb-7fd881d2ba6d is in state STARTED
2025-07-12 20:28:29.822242 | orchestrator | 2025-07-12 20:28:29 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:28:29.822336 | orchestrator | 2025-07-12 20:28:29 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:28:32.873617 | orchestrator | 2025-07-12 20:28:32 | INFO  | Task e2c10b42-c629-40ce-95cb-7fd881d2ba6d is in state STARTED
2025-07-12 20:28:32.875546 | orchestrator | 2025-07-12 20:28:32 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:28:32.875582 | orchestrator | 2025-07-12 20:28:32 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:28:35.919331 | orchestrator | 2025-07-12 20:28:35 | INFO  | Task e2c10b42-c629-40ce-95cb-7fd881d2ba6d is in state STARTED
2025-07-12 20:28:35.922274 | orchestrator | 2025-07-12 20:28:35 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:28:35.922409 | orchestrator | 2025-07-12 20:28:35 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:28:38.972774 | orchestrator | 2025-07-12 20:28:38 | INFO  | Task e2c10b42-c629-40ce-95cb-7fd881d2ba6d is in state STARTED
2025-07-12 20:28:38.974248 | orchestrator | 2025-07-12 20:28:38 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:28:38.974337 | orchestrator | 2025-07-12 20:28:38 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:28:42.029731 | orchestrator | 2025-07-12 20:28:42 | INFO  | Task e2c10b42-c629-40ce-95cb-7fd881d2ba6d is in state STARTED
2025-07-12 20:28:42.031477 | orchestrator | 2025-07-12 20:28:42 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:28:42.031527 | orchestrator | 2025-07-12 20:28:42 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:28:45.077962 | orchestrator | 2025-07-12 20:28:45 | INFO  | Task e2c10b42-c629-40ce-95cb-7fd881d2ba6d is in state STARTED
2025-07-12 20:28:45.078993 | orchestrator | 2025-07-12 20:28:45 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:28:45.079167 | orchestrator | 2025-07-12 20:28:45 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:28:48.123409 | orchestrator | 2025-07-12 20:28:48 | INFO  | Task e2c10b42-c629-40ce-95cb-7fd881d2ba6d is in state STARTED
2025-07-12 20:28:48.125906 | orchestrator | 2025-07-12 20:28:48 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:28:48.126148 | orchestrator | 2025-07-12 20:28:48 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:28:51.187511 | orchestrator | 2025-07-12 20:28:51 | INFO  | Task f29a1b76-fc02-43f7-a750-9050303912a9 is in state STARTED
2025-07-12 20:28:51.189042 | orchestrator | 2025-07-12 20:28:51 | INFO  | Task e2c10b42-c629-40ce-95cb-7fd881d2ba6d is in state SUCCESS
2025-07-12 20:28:51.189083 | orchestrator | 2025-07-12 20:28:51 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED
2025-07-12 20:28:51.189830 | orchestrator | 2025-07-12 20:28:51 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:28:51.190645 | orchestrator | 2025-07-12 20:28:51 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED
2025-07-12 20:28:51.190727 | orchestrator | 2025-07-12 20:28:51 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:28:54.252795 | orchestrator | 2025-07-12 20:28:54 | INFO  | Task f29a1b76-fc02-43f7-a750-9050303912a9 is in state STARTED
2025-07-12 20:28:54.253045 | orchestrator | 2025-07-12 20:28:54 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED
2025-07-12 20:28:54.254246 | orchestrator | 2025-07-12 20:28:54 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state STARTED
2025-07-12 20:28:54.258810 | orchestrator | 2025-07-12 20:28:54 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED
2025-07-12 20:28:54.260338 | orchestrator | 2025-07-12 20:28:54 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:28:57.314514 | orchestrator | 2025-07-12 20:28:57 | INFO  | Task f29a1b76-fc02-43f7-a750-9050303912a9 is in state SUCCESS
2025-07-12 20:28:57.316526 | orchestrator | 2025-07-12 20:28:57 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED
2025-07-12 20:28:57.318371 | orchestrator | 2025-07-12 20:28:57 | INFO  | Task 63901411-932f-411c-ad2e-4e240caca7ea is in state SUCCESS
2025-07-12 20:28:57.320669 | orchestrator |
2025-07-12 20:28:57.320711 | orchestrator |
2025-07-12 20:28:57.320724 | orchestrator | PLAY [Apply role cephclient] ***************************************************
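The block above shows the deploy tooling polling Celery task IDs once per second until each one leaves the STARTED state. The same wait-until-done pattern can be sketched as a plain Ansible task; the URL, variable names, and response shape below are hypothetical stand-ins, not the actual OSISM client implementation:

```yaml
# Sketch only: poll a status endpoint until the task reports SUCCESS.
# "manager.example.com", "task_id", and the .state field are assumptions.
- name: Wait for task to finish
  ansible.builtin.uri:
    url: "https://manager.example.com/api/tasks/{{ task_id }}"
    return_content: true
  register: task_state
  until: task_state.json.state == "SUCCESS"
  retries: 300  # give up after roughly five minutes
  delay: 1      # "Wait 1 second(s) until the next check"
```

The `until`/`retries`/`delay` combination is the standard Ansible idiom for this kind of fixed-interval polling loop.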
2025-07-12 20:28:57.320766 | orchestrator |
2025-07-12 20:28:57.320778 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-07-12 20:28:57.320789 | orchestrator | Saturday 12 July 2025 20:27:51 +0000 (0:00:00.242) 0:00:00.242 *********
2025-07-12 20:28:57.320801 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-07-12 20:28:57.320814 | orchestrator |
2025-07-12 20:28:57.320825 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-07-12 20:28:57.320835 | orchestrator | Saturday 12 July 2025 20:27:52 +0000 (0:00:00.216) 0:00:00.459 *********
2025-07-12 20:28:57.320847 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-07-12 20:28:57.320858 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-07-12 20:28:57.320869 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-07-12 20:28:57.320880 | orchestrator |
2025-07-12 20:28:57.320891 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-07-12 20:28:57.320901 | orchestrator | Saturday 12 July 2025 20:27:53 +0000 (0:00:01.355) 0:00:01.814 *********
2025-07-12 20:28:57.320912 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-07-12 20:28:57.320923 | orchestrator |
2025-07-12 20:28:57.320934 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-07-12 20:28:57.320944 | orchestrator | Saturday 12 July 2025 20:27:54 +0000 (0:00:01.254) 0:00:03.069 *********
2025-07-12 20:28:57.320955 | orchestrator | changed: [testbed-manager]
2025-07-12 20:28:57.320966 | orchestrator |
2025-07-12 20:28:57.320976 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-07-12 20:28:57.320987 | orchestrator | Saturday 12 July 2025 20:27:55 +0000 (0:00:01.089) 0:00:04.158 *********
2025-07-12 20:28:57.320998 | orchestrator | changed: [testbed-manager]
2025-07-12 20:28:57.321008 | orchestrator |
2025-07-12 20:28:57.321019 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-07-12 20:28:57.321029 | orchestrator | Saturday 12 July 2025 20:27:56 +0000 (0:00:00.913) 0:00:05.072 *********
2025-07-12 20:28:57.321044 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-07-12 20:28:57.321063 | orchestrator | ok: [testbed-manager]
2025-07-12 20:28:57.321089 | orchestrator |
2025-07-12 20:28:57.321110 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-07-12 20:28:57.321128 | orchestrator | Saturday 12 July 2025 20:28:38 +0000 (0:00:41.871) 0:00:46.944 *********
2025-07-12 20:28:57.321146 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-07-12 20:28:57.321165 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-07-12 20:28:57.321183 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-07-12 20:28:57.321203 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-07-12 20:28:57.321223 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-07-12 20:28:57.321241 | orchestrator |
2025-07-12 20:28:57.321257 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-07-12 20:28:57.321284 | orchestrator | Saturday 12 July 2025 20:28:42 +0000 (0:00:04.267) 0:00:51.211 *********
2025-07-12 20:28:57.321298 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-07-12 20:28:57.321310 | orchestrator |
2025-07-12 20:28:57.321322 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-07-12 20:28:57.321334 | orchestrator | Saturday 12 July 2025 20:28:43 +0000 (0:00:00.463) 0:00:51.674 *********
2025-07-12 20:28:57.321435 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:28:57.321450 | orchestrator |
2025-07-12 20:28:57.321462 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-07-12 20:28:57.321474 | orchestrator | Saturday 12 July 2025 20:28:43 +0000 (0:00:00.138) 0:00:51.812 *********
2025-07-12 20:28:57.321487 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:28:57.321563 | orchestrator |
2025-07-12 20:28:57.321580 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-07-12 20:28:57.321597 | orchestrator | Saturday 12 July 2025 20:28:43 +0000 (0:00:00.300) 0:00:52.113 *********
2025-07-12 20:28:57.321612 | orchestrator | changed: [testbed-manager]
2025-07-12 20:28:57.321630 | orchestrator |
2025-07-12 20:28:57.321641 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-07-12 20:28:57.321652 | orchestrator | Saturday 12 July 2025 20:28:45 +0000 (0:00:01.675) 0:00:53.789 *********
2025-07-12 20:28:57.321663 | orchestrator | changed: [testbed-manager]
2025-07-12 20:28:57.321674 | orchestrator |
2025-07-12 20:28:57.321687 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-07-12 20:28:57.321706 | orchestrator | Saturday 12 July 2025 20:28:46 +0000 (0:00:01.005) 0:00:54.794 *********
2025-07-12 20:28:57.321722 | orchestrator | changed: [testbed-manager]
2025-07-12 20:28:57.321732 | orchestrator |
2025-07-12 20:28:57.321742 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-07-12 20:28:57.321751 | orchestrator | Saturday 12 July 2025 20:28:47 +0000 (0:00:00.670) 0:00:55.465 *********
2025-07-12 20:28:57.321760 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-07-12 20:28:57.321770 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-07-12 20:28:57.321779 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-07-12 20:28:57.321789 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-07-12 20:28:57.321798 | orchestrator |
2025-07-12 20:28:57.321807 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:28:57.321817 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:28:57.321828 | orchestrator |
2025-07-12 20:28:57.321837 | orchestrator |
2025-07-12 20:28:57.321909 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:28:57.321921 | orchestrator | Saturday 12 July 2025 20:28:48 +0000 (0:00:01.611) 0:00:57.076 *********
2025-07-12 20:28:57.321930 | orchestrator | ===============================================================================
2025-07-12 20:28:57.321940 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.87s
2025-07-12 20:28:57.321949 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.27s
2025-07-12 20:28:57.321959 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.68s
2025-07-12 20:28:57.321969 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.61s
2025-07-12 20:28:57.321978 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.36s
2025-07-12 20:28:57.321988 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.25s
2025-07-12 20:28:57.321997 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 1.09s
2025-07-12 20:28:57.322007 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 1.01s
2025-07-12 20:28:57.322145 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.91s
2025-07-12 20:28:57.322160 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.67s
2025-07-12 20:28:57.322170 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.46s
2025-07-12 20:28:57.322179 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.30s
2025-07-12 20:28:57.322189 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s
2025-07-12 20:28:57.322198 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s
2025-07-12 20:28:57.322208 | orchestrator |
2025-07-12 20:28:57.322217 | orchestrator |
2025-07-12 20:28:57.322227 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:28:57.322236 | orchestrator |
2025-07-12 20:28:57.322246 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:28:57.322289 | orchestrator | Saturday 12 July 2025 20:28:53 +0000 (0:00:00.180) 0:00:00.180 *********
2025-07-12 20:28:57.322300 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:28:57.322310 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:28:57.322319 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:28:57.322329 | orchestrator |
2025-07-12 20:28:57.322338 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:28:57.322384 | orchestrator | Saturday 12 July 2025 20:28:53 +0000 (0:00:00.304) 0:00:00.485 *********
2025-07-12 20:28:57.322395 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-07-12 20:28:57.322405 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-07-12 20:28:57.322415 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-07-12 20:28:57.322424 | orchestrator |
2025-07-12 20:28:57.322434 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-07-12 20:28:57.322443 | orchestrator |
2025-07-12 20:28:57.322452 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-07-12 20:28:57.322462 | orchestrator | Saturday 12 July 2025 20:28:54 +0000 (0:00:00.728) 0:00:01.213 *********
2025-07-12 20:28:57.322471 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:28:57.322489 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:28:57.322499 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:28:57.322508 | orchestrator |
2025-07-12 20:28:57.322518 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:28:57.322528 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:28:57.322539 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:28:57.322548 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:28:57.322558 | orchestrator |
2025-07-12 20:28:57.322567 | orchestrator |
2025-07-12 20:28:57.322579 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:28:57.322595 | orchestrator | Saturday 12 July 2025 20:28:55 +0000 (0:00:00.670) 0:00:01.885 *********
2025-07-12 20:28:57.322612 | orchestrator | ===============================================================================
2025-07-12 20:28:57.322629 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.73s
2025-07-12 20:28:57.322639 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.67s
2025-07-12 20:28:57.322648 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2025-07-12 20:28:57.322658 | orchestrator |
2025-07-12 20:28:57.322667 | orchestrator |
2025-07-12 20:28:57.322676 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:28:57.322685 | orchestrator |
2025-07-12 20:28:57.322695 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:28:57.322704 | orchestrator | Saturday 12 July 2025 20:26:16 +0000 (0:00:00.296) 0:00:00.296 *********
2025-07-12 20:28:57.322713 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:28:57.322723 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:28:57.322732 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:28:57.322742 | orchestrator |
2025-07-12 20:28:57.322751 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:28:57.322760 | orchestrator | Saturday 12 July 2025 20:26:17 +0000 (0:00:00.340) 0:00:00.637 *********
2025-07-12 20:28:57.322769 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-07-12 20:28:57.322779 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-07-12 20:28:57.322788 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-07-12 20:28:57.322798 | orchestrator |
2025-07-12 20:28:57.322807 | orchestrator | PLAY [Apply role keystone] *****************************************************
2025-07-12 20:28:57.322817 | orchestrator |
2025-07-12 20:28:57.322867 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-07-12 20:28:57.322889 | orchestrator | Saturday 12 July 2025 20:26:17 +0000 (0:00:00.459) 0:00:01.096 *********
2025-07-12 20:28:57.322899 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:28:57.322908 | orchestrator |
2025-07-12 20:28:57.322918 |
orchestrator | TASK [keystone : Ensuring config directories exist] ****************************
2025-07-12 20:28:57.322928 | orchestrator | Saturday 12 July 2025 20:26:18 +0000 (0:00:00.552) 0:00:01.649 *********
2025-07-12 20:28:57.322947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 20:28:57.322974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 20:28:57.322987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 20:28:57.323029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 20:28:57.323050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 20:28:57.323061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 20:28:57.323071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 20:28:57.323086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 20:28:57.323097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 20:28:57.323107 | orchestrator |
2025-07-12 20:28:57.323117 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2025-07-12 20:28:57.323192 | orchestrator | Saturday 12 July 2025 20:26:20 +0000 (0:00:01.962) 0:00:03.612 *********
2025-07-12 20:28:57.323203 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml)
2025-07-12 20:28:57.323213 | orchestrator |
2025-07-12 20:28:57.323222 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2025-07-12 20:28:57.323239 | orchestrator | Saturday 12 July 2025 20:26:20 +0000 (0:00:00.763) 0:00:04.376 *********
2025-07-12 20:28:57.323249 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:28:57.323258 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:28:57.323267 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:28:57.323277 | orchestrator |
2025-07-12 20:28:57.323286 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-07-12 20:28:57.323296 | orchestrator | Saturday 12 July 2025 20:26:21 +0000 (0:00:00.384) 0:00:04.760 *********
2025-07-12 20:28:57.323305 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 20:28:57.323314 | orchestrator |
2025-07-12 20:28:57.323324 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-07-12 20:28:57.323456 | orchestrator | Saturday 12 July 2025 20:26:21 +0000 (0:00:00.606) 0:00:05.366 *********
2025-07-12 20:28:57.323471 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:28:57.323480 | orchestrator |
2025-07-12 20:28:57.323490 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2025-07-12 20:28:57.323499 | orchestrator | Saturday 12 July 2025 20:26:22 +0000 (0:00:00.536) 0:00:05.902 *********
2025-07-12 20:28:57.323510 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 20:28:57.323529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 20:28:57.323541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 20:28:57.323559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 20:28:57.323580 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 20:28:57.323590 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 20:28:57.323601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 20:28:57.323622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 20:28:57.323633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 20:28:57.323650 | orchestrator |
2025-07-12 20:28:57.323660 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] ***
2025-07-12 20:28:57.323670 | orchestrator | Saturday 12 July 2025 20:26:25 +0000 (0:00:03.238) 0:00:09.141 *********
2025-07-12 20:28:57.323687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 20:28:57.323698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 20:28:57.323712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 20:28:57.323729 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:28:57.323755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-07-12 20:28:57.323771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-07-12 20:28:57.323788 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-07-12 20:28:57.323798 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:28:57.323818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 20:28:57.323830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:28:57.323841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 20:28:57.323852 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:57.323863 | orchestrator | 2025-07-12 20:28:57.323874 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-07-12 20:28:57.323885 | orchestrator | Saturday 12 July 2025 20:26:26 +0000 (0:00:00.506) 0:00:09.647 ********* 2025-07-12 20:28:57.323903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 20:28:57.323922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:28:57.323940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 20:28:57.323950 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:57.323960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 20:28:57.323971 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:28:57.323986 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 20:28:57.324003 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:57.324013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-07-12 20:28:57.324031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:28:57.324041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-07-12 20:28:57.324052 | 
orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:57.324061 | orchestrator | 2025-07-12 20:28:57.324071 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-07-12 20:28:57.324081 | orchestrator | Saturday 12 July 2025 20:26:26 +0000 (0:00:00.704) 0:00:10.351 ********* 2025-07-12 20:28:57.324091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:28:57.324113 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:28:57.324131 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:28:57.324142 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 20:28:57.324152 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 20:28:57.324163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 20:28:57.324185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:28:57.324195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:28:57.324205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:28:57.324215 | orchestrator | 2025-07-12 20:28:57.324225 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-07-12 20:28:57.324235 | orchestrator | Saturday 12 July 2025 20:26:30 +0000 (0:00:03.199) 0:00:13.551 ********* 2025-07-12 20:28:57.324252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:28:57.324263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:28:57.324286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:28:57.324297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:28:57.324314 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': 
True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:28:57.324325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:28:57.324335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:28:57.324379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:28:57.324411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:28:57.324429 | orchestrator | 2025-07-12 20:28:57.324446 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-07-12 20:28:57.324457 | orchestrator | Saturday 12 July 2025 20:26:34 +0000 (0:00:04.898) 0:00:18.449 ********* 2025-07-12 20:28:57.324466 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:28:57.324476 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:28:57.324486 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:28:57.324495 | orchestrator | 2025-07-12 20:28:57.324505 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-07-12 20:28:57.324514 | orchestrator | Saturday 12 July 2025 20:26:36 +0000 (0:00:01.441) 0:00:19.891 ********* 2025-07-12 20:28:57.324523 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:57.324533 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:57.324542 | 
orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:57.324551 | orchestrator | 2025-07-12 20:28:57.324561 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-07-12 20:28:57.324570 | orchestrator | Saturday 12 July 2025 20:26:36 +0000 (0:00:00.604) 0:00:20.495 ********* 2025-07-12 20:28:57.324580 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:57.324590 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:57.324599 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:57.324608 | orchestrator | 2025-07-12 20:28:57.324617 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-07-12 20:28:57.324627 | orchestrator | Saturday 12 July 2025 20:26:37 +0000 (0:00:00.459) 0:00:20.955 ********* 2025-07-12 20:28:57.324636 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:57.324645 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:57.324655 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:57.324664 | orchestrator | 2025-07-12 20:28:57.324673 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-07-12 20:28:57.324683 | orchestrator | Saturday 12 July 2025 20:26:37 +0000 (0:00:00.282) 0:00:21.237 ********* 2025-07-12 20:28:57.324708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:28:57.324730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:28:57.324770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:28:57.324783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:28:57.324799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:28:57.324810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 
'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-07-12 20:28:57.324828 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:28:57.324838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:28:57.324853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 
'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:28:57.324863 | orchestrator | 2025-07-12 20:28:57.324873 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-12 20:28:57.324883 | orchestrator | Saturday 12 July 2025 20:26:39 +0000 (0:00:02.011) 0:00:23.249 ********* 2025-07-12 20:28:57.324892 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:57.324902 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:57.324911 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:57.324920 | orchestrator | 2025-07-12 20:28:57.324930 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-07-12 20:28:57.324939 | orchestrator | Saturday 12 July 2025 20:26:40 +0000 (0:00:00.294) 0:00:23.544 ********* 2025-07-12 20:28:57.324949 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-07-12 20:28:57.324958 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-07-12 20:28:57.324968 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-07-12 20:28:57.324977 | orchestrator | 2025-07-12 20:28:57.324987 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-07-12 20:28:57.324996 | orchestrator | Saturday 12 July 2025 20:26:41 +0000 (0:00:01.837) 0:00:25.381 ********* 2025-07-12 
20:28:57.325006 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 20:28:57.325015 | orchestrator |
2025-07-12 20:28:57.325024 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-07-12 20:28:57.325034 | orchestrator | Saturday 12 July 2025 20:26:42 +0000 (0:00:00.890) 0:00:26.272 *********
2025-07-12 20:28:57.325050 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:28:57.325059 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:28:57.325069 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:28:57.325078 | orchestrator |
2025-07-12 20:28:57.325088 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-07-12 20:28:57.325097 | orchestrator | Saturday 12 July 2025 20:26:43 +0000 (0:00:00.494) 0:00:26.767 *********
2025-07-12 20:28:57.325107 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-07-12 20:28:57.325122 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-07-12 20:28:57.325131 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 20:28:57.325141 | orchestrator |
2025-07-12 20:28:57.325150 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-07-12 20:28:57.325160 | orchestrator | Saturday 12 July 2025 20:26:44 +0000 (0:00:01.250) 0:00:28.017 *********
2025-07-12 20:28:57.325169 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:28:57.325179 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:28:57.325188 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:28:57.325198 | orchestrator |
2025-07-12 20:28:57.325207 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-07-12 20:28:57.325217 | orchestrator | Saturday 12 July 2025 20:26:44 +0000 (0:00:00.312) 0:00:28.329 *********
2025-07-12 20:28:57.325226 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-07-12 20:28:57.325236 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-07-12 20:28:57.325245 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-07-12 20:28:57.325255 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-07-12 20:28:57.325264 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-07-12 20:28:57.325274 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-07-12 20:28:57.325283 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-07-12 20:28:57.325293 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-07-12 20:28:57.325303 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-07-12 20:28:57.325312 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-07-12 20:28:57.325321 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-07-12 20:28:57.325331 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-07-12 20:28:57.325340 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-07-12 20:28:57.325372 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-07-12 20:28:57.325382 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-07-12 20:28:57.325392 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-12 20:28:57.325401 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-12 20:28:57.325416 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-07-12 20:28:57.325426 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-12 20:28:57.325435 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-12 20:28:57.325444 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-07-12 20:28:57.325461 | orchestrator |
2025-07-12 20:28:57.325470 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-07-12 20:28:57.325480 | orchestrator | Saturday 12 July 2025 20:26:53 +0000 (0:00:08.671) 0:00:37.001 *********
2025-07-12 20:28:57.325489 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-12 20:28:57.325499 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-12 20:28:57.325508 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-07-12 20:28:57.325518 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-12 20:28:57.325529 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-12 20:28:57.325545 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-07-12 20:28:57.325556 | orchestrator |
2025-07-12 20:28:57.325566 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2025-07-12 20:28:57.325575 | orchestrator | Saturday 12 July 2025 20:26:55 +0000 (0:00:02.455) 0:00:39.456 
********* 2025-07-12 20:28:57.325594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:28:57.325606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:28:57.325623 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-07-12 20:28:57.325640 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 20:28:57.325651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': 
{'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 20:28:57.325669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-07-12 20:28:57.325679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:28:57.325689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:28:57.325699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-07-12 20:28:57.325719 | orchestrator | 2025-07-12 20:28:57.325728 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-07-12 20:28:57.325743 | orchestrator | Saturday 12 July 2025 20:26:58 +0000 (0:00:02.442) 0:00:41.899 ********* 2025-07-12 20:28:57.325753 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:28:57.325763 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:28:57.325772 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:28:57.325782 | orchestrator | 2025-07-12 20:28:57.325792 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-07-12 20:28:57.325801 | orchestrator | Saturday 12 July 2025 20:26:58 +0000 (0:00:00.301) 0:00:42.200 ********* 2025-07-12 
20:28:57.325811 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:28:57.325820 | orchestrator |
2025-07-12 20:28:57.325829 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ******
2025-07-12 20:28:57.325839 | orchestrator | Saturday 12 July 2025 20:27:00 +0000 (0:00:02.238) 0:00:44.439 *********
2025-07-12 20:28:57.325848 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:28:57.325858 | orchestrator |
2025-07-12 20:28:57.325867 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] **********
2025-07-12 20:28:57.325877 | orchestrator | Saturday 12 July 2025 20:27:03 +0000 (0:00:02.567) 0:00:47.006 *********
2025-07-12 20:28:57.325886 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:28:57.325896 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:28:57.325906 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:28:57.325915 | orchestrator |
2025-07-12 20:28:57.325924 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] *****************
2025-07-12 20:28:57.325934 | orchestrator | Saturday 12 July 2025 20:27:04 +0000 (0:00:00.854) 0:00:47.860 *********
2025-07-12 20:28:57.325944 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:28:57.325953 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:28:57.325962 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:28:57.325972 | orchestrator |
2025-07-12 20:28:57.325981 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2025-07-12 20:28:57.325991 | orchestrator | Saturday 12 July 2025 20:27:04 +0000 (0:00:00.331) 0:00:48.192 *********
2025-07-12 20:28:57.326000 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:28:57.326009 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:28:57.326057 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:28:57.326067 | orchestrator |
2025-07-12 20:28:57.326076 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2025-07-12 20:28:57.326086 | orchestrator | Saturday 12 July 2025 20:27:05 +0000 (0:00:00.379) 0:00:48.572 *********
2025-07-12 20:28:57.326095 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:28:57.326105 | orchestrator |
2025-07-12 20:28:57.326114 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2025-07-12 20:28:57.326124 | orchestrator | Saturday 12 July 2025 20:27:18 +0000 (0:00:13.340) 0:01:01.912 *********
2025-07-12 20:28:57.326133 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:28:57.326143 | orchestrator |
2025-07-12 20:28:57.326158 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-07-12 20:28:57.326168 | orchestrator | Saturday 12 July 2025 20:27:27 +0000 (0:00:08.918) 0:01:10.831 *********
2025-07-12 20:28:57.326178 | orchestrator |
2025-07-12 20:28:57.326187 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-07-12 20:28:57.326197 | orchestrator | Saturday 12 July 2025 20:27:27 +0000 (0:00:00.255) 0:01:11.086 *********
2025-07-12 20:28:57.326206 | orchestrator |
2025-07-12 20:28:57.326216 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2025-07-12 20:28:57.326225 | orchestrator | Saturday 12 July 2025 20:27:27 +0000 (0:00:00.071) 0:01:11.157 *********
2025-07-12 20:28:57.326235 | orchestrator |
2025-07-12 20:28:57.326244 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2025-07-12 20:28:57.326254 | orchestrator | Saturday 12 July 2025 20:27:27 +0000 (0:00:00.064) 0:01:11.222 *********
2025-07-12 20:28:57.326271 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:28:57.326281 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:28:57.326290 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:28:57.326299 | orchestrator |
2025-07-12 20:28:57.326309 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2025-07-12 20:28:57.326318 | orchestrator | Saturday 12 July 2025 20:27:51 +0000 (0:00:23.458) 0:01:34.681 *********
2025-07-12 20:28:57.326328 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:28:57.326337 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:28:57.326404 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:28:57.326416 | orchestrator |
2025-07-12 20:28:57.326426 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2025-07-12 20:28:57.326435 | orchestrator | Saturday 12 July 2025 20:28:00 +0000 (0:00:09.668) 0:01:44.349 *********
2025-07-12 20:28:57.326444 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:28:57.326454 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:28:57.326463 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:28:57.326472 | orchestrator |
2025-07-12 20:28:57.326482 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-07-12 20:28:57.326491 | orchestrator | Saturday 12 July 2025 20:28:07 +0000 (0:00:06.447) 0:01:50.797 *********
2025-07-12 20:28:57.326501 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:28:57.326510 | orchestrator |
2025-07-12 20:28:57.326520 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-07-12 20:28:57.326529 | orchestrator | Saturday 12 July 2025 20:28:08 +0000 (0:00:00.759) 0:01:51.556 *********
2025-07-12 20:28:57.326538 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:28:57.326548 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:28:57.326557 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:28:57.326566 | orchestrator |
2025-07-12 20:28:57.326576 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-07-12 20:28:57.326585 | orchestrator | Saturday 12 July 2025 20:28:08 +0000 (0:00:00.720) 0:01:52.277 *********
2025-07-12 20:28:57.326595 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:28:57.326604 | orchestrator |
2025-07-12 20:28:57.326613 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-07-12 20:28:57.326621 | orchestrator | Saturday 12 July 2025 20:28:10 +0000 (0:00:01.746) 0:01:54.023 *********
2025-07-12 20:28:57.326629 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-07-12 20:28:57.326636 | orchestrator |
2025-07-12 20:28:57.326649 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-07-12 20:28:57.326657 | orchestrator | Saturday 12 July 2025 20:28:19 +0000 (0:00:09.286) 0:02:03.310 *********
2025-07-12 20:28:57.326665 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-07-12 20:28:57.326673 | orchestrator |
2025-07-12 20:28:57.326680 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-07-12 20:28:57.326688 | orchestrator | Saturday 12 July 2025 20:28:38 +0000 (0:00:18.215) 0:02:21.526 *********
2025-07-12 20:28:57.326696 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-07-12 20:28:57.326704 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-07-12 20:28:57.326712 | orchestrator |
2025-07-12 20:28:57.326719 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-07-12 20:28:57.326727 | orchestrator | Saturday 12 July 2025 20:28:51 +0000 (0:00:13.200) 0:02:34.726 *********
2025-07-12 20:28:57.326735 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:28:57.326742 | orchestrator |
2025-07-12 20:28:57.326750 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-07-12 20:28:57.326758 | orchestrator | Saturday 12 July 2025 20:28:51 +0000 (0:00:00.117) 0:02:34.844 *********
2025-07-12 20:28:57.326771 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:28:57.326779 | orchestrator |
2025-07-12 20:28:57.326787 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-07-12 20:28:57.326795 | orchestrator | Saturday 12 July 2025 20:28:51 +0000 (0:00:00.116) 0:02:34.960 *********
2025-07-12 20:28:57.326802 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:28:57.326810 | orchestrator |
2025-07-12 20:28:57.326818 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-07-12 20:28:57.326826 | orchestrator | Saturday 12 July 2025 20:28:51 +0000 (0:00:00.121) 0:02:35.082 *********
2025-07-12 20:28:57.326833 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:28:57.326841 | orchestrator |
2025-07-12 20:28:57.326849 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-07-12 20:28:57.326856 | orchestrator | Saturday 12 July 2025 20:28:51 +0000 (0:00:00.343) 0:02:35.425 *********
2025-07-12 20:28:57.326864 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:28:57.326872 | orchestrator |
2025-07-12 20:28:57.326879 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-07-12 20:28:57.326887 | orchestrator | Saturday 12 July 2025 20:28:55 +0000 (0:00:03.399) 0:02:38.824 *********
2025-07-12 20:28:57.326895 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:28:57.326902 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:28:57.326910 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:28:57.326917 | orchestrator |
2025-07-12 20:28:57.326930 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:28:57.326939 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-07-12 20:28:57.326948 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-07-12 20:28:57.326956 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-07-12 20:28:57.326964 | orchestrator |
2025-07-12 20:28:57.326972 | orchestrator |
2025-07-12 20:28:57.326979 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:28:57.326987 | orchestrator | Saturday 12 July 2025 20:28:56 +0000 (0:00:00.797) 0:02:39.622 *********
2025-07-12 20:28:57.326994 | orchestrator | ===============================================================================
2025-07-12 20:28:57.327002 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 23.46s
2025-07-12 20:28:57.327010 | orchestrator | service-ks-register : keystone | Creating services --------------------- 18.22s
2025-07-12 20:28:57.327017 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.34s
2025-07-12 20:28:57.327025 | orchestrator | service-ks-register : keystone | Creating endpoints -------------------- 13.20s
2025-07-12 20:28:57.327033 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.67s
2025-07-12 20:28:57.327041 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.29s
2025-07-12 20:28:57.327048 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 8.92s
2025-07-12 20:28:57.327056 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.67s
2025-07-12 20:28:57.327063 | orchestrator | keystone : Restart keystone container ----------------------------------- 6.45s
2025-07-12 20:28:57.327071 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 4.90s
2025-07-12 20:28:57.327079 | orchestrator | keystone : Creating default user role ----------------------------------- 3.40s
2025-07-12 20:28:57.327086 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.24s
2025-07-12 20:28:57.327094 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.20s
2025-07-12 20:28:57.327102 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.57s
2025-07-12 20:28:57.327118 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.46s
2025-07-12 20:28:57.327126 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.44s
2025-07-12 20:28:57.327134 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.24s
2025-07-12 20:28:57.327141 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.01s
2025-07-12 20:28:57.327154 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.96s
2025-07-12 20:28:57.327162 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.84s
2025-07-12 20:28:57.327169 | orchestrator | 2025-07-12 20:28:57 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED
2025-07-12 20:28:57.327177 | orchestrator | 2025-07-12 20:28:57 | INFO  | Task 39e2d3f2-f453-41d7-8de4-2befadaf1158 is in state STARTED
2025-07-12 20:28:57.327185 | orchestrator | 2025-07-12 20:28:57 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED
2025-07-12 20:28:57.327192 | orchestrator | 2025-07-12 20:28:57 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:29:00.375991 | orchestrator | 2025-07-12 20:29:00 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED
2025-07-12 20:29:00.376106 | orchestrator | 2025-07-12 20:29:00 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED
2025-07-12 20:29:00.376116 | orchestrator | 2025-07-12 20:29:00 | INFO  | Task 39e2d3f2-f453-41d7-8de4-2befadaf1158 is in state STARTED
2025-07-12 20:29:00.376123 | orchestrator | 2025-07-12 20:29:00 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED
2025-07-12 20:29:00.376919 | orchestrator | 2025-07-12 20:29:00 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED
2025-07-12 20:29:00.376991 | orchestrator | 2025-07-12 20:29:00 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:29:03.418878 | orchestrator | 2025-07-12 20:29:03 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED
2025-07-12 20:29:03.420081 | orchestrator | 2025-07-12 20:29:03 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED
2025-07-12 20:29:03.426623 | orchestrator | 2025-07-12 20:29:03 | INFO  | Task 39e2d3f2-f453-41d7-8de4-2befadaf1158 is in state STARTED
2025-07-12 20:29:03.427067 | orchestrator | 2025-07-12 20:29:03 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED
2025-07-12 20:29:03.427969 | orchestrator | 2025-07-12 20:29:03 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED
2025-07-12 20:29:03.428019 | orchestrator | 2025-07-12 20:29:03 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:29:06.481409 | orchestrator | 2025-07-12 20:29:06 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED
2025-07-12 20:29:06.481928 | orchestrator | 2025-07-12 20:29:06 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED
2025-07-12 20:29:06.483682 | orchestrator | 2025-07-12 20:29:06 | INFO  | Task 39e2d3f2-f453-41d7-8de4-2befadaf1158 is in state STARTED
2025-07-12 20:29:06.484583 | orchestrator | 2025-07-12 20:29:06 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED
2025-07-12 20:29:06.485473 | orchestrator | 2025-07-12 20:29:06 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED
2025-07-12 20:29:06.485528 | orchestrator | 2025-07-12 20:29:06 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:29:09.533546 | orchestrator | 2025-07-12 20:29:09 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED
2025-07-12 20:29:09.536865 | orchestrator | 2025-07-12 20:29:09 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED
2025-07-12 20:29:09.538786 | orchestrator | 2025-07-12 20:29:09 | INFO  | Task 39e2d3f2-f453-41d7-8de4-2befadaf1158 is in state STARTED
2025-07-12 20:29:09.540341 | orchestrator | 2025-07-12 20:29:09 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED
2025-07-12 20:29:09.541882 | orchestrator | 2025-07-12 20:29:09 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED
2025-07-12 20:29:09.541918 | orchestrator | 2025-07-12 20:29:09 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:29:12.584788 | orchestrator | 2025-07-12 20:29:12 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED
2025-07-12 20:29:12.585395 | orchestrator | 2025-07-12 20:29:12 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED
2025-07-12 20:29:12.586788 | orchestrator | 2025-07-12 20:29:12 | INFO  | Task 39e2d3f2-f453-41d7-8de4-2befadaf1158 is in state STARTED
2025-07-12 20:29:12.587918 | orchestrator | 2025-07-12 20:29:12 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED
2025-07-12 20:29:12.589655 | orchestrator | 2025-07-12 20:29:12 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED
2025-07-12 20:29:12.589746 | orchestrator | 2025-07-12 20:29:12 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:29:15.636691 | orchestrator | 2025-07-12 20:29:15 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED
2025-07-12 
20:29:15.639715 | orchestrator | 2025-07-12 20:29:15 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED 2025-07-12 20:29:15.641610 | orchestrator | 2025-07-12 20:29:15 | INFO  | Task 39e2d3f2-f453-41d7-8de4-2befadaf1158 is in state STARTED 2025-07-12 20:29:15.644607 | orchestrator | 2025-07-12 20:29:15 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED 2025-07-12 20:29:15.646474 | orchestrator | 2025-07-12 20:29:15 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED 2025-07-12 20:29:15.646560 | orchestrator | 2025-07-12 20:29:15 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:29:18.688577 | orchestrator | 2025-07-12 20:29:18 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED 2025-07-12 20:29:18.689118 | orchestrator | 2025-07-12 20:29:18 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED 2025-07-12 20:29:18.691137 | orchestrator | 2025-07-12 20:29:18 | INFO  | Task 39e2d3f2-f453-41d7-8de4-2befadaf1158 is in state STARTED 2025-07-12 20:29:18.693192 | orchestrator | 2025-07-12 20:29:18 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED 2025-07-12 20:29:18.694203 | orchestrator | 2025-07-12 20:29:18 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED 2025-07-12 20:29:18.694225 | orchestrator | 2025-07-12 20:29:18 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:29:21.734419 | orchestrator | 2025-07-12 20:29:21 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED 2025-07-12 20:29:21.734516 | orchestrator | 2025-07-12 20:29:21 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED 2025-07-12 20:29:21.734527 | orchestrator | 2025-07-12 20:29:21 | INFO  | Task 39e2d3f2-f453-41d7-8de4-2befadaf1158 is in state STARTED 2025-07-12 20:29:21.735428 | orchestrator | 2025-07-12 20:29:21 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED 2025-07-12 
20:29:21.736259 | orchestrator | 2025-07-12 20:29:21 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED 2025-07-12 20:29:21.736291 | orchestrator | 2025-07-12 20:29:21 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:29:24.786500 | orchestrator | 2025-07-12 20:29:24 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED 2025-07-12 20:29:24.789988 | orchestrator | 2025-07-12 20:29:24 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED 2025-07-12 20:29:24.790136 | orchestrator | 2025-07-12 20:29:24 | INFO  | Task 39e2d3f2-f453-41d7-8de4-2befadaf1158 is in state STARTED 2025-07-12 20:29:24.790151 | orchestrator | 2025-07-12 20:29:24 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED 2025-07-12 20:29:24.793159 | orchestrator | 2025-07-12 20:29:24 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED 2025-07-12 20:29:24.793226 | orchestrator | 2025-07-12 20:29:24 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:29:27.841064 | orchestrator | 2025-07-12 20:29:27 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED 2025-07-12 20:29:27.841195 | orchestrator | 2025-07-12 20:29:27 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED 2025-07-12 20:29:27.841224 | orchestrator | 2025-07-12 20:29:27 | INFO  | Task 39e2d3f2-f453-41d7-8de4-2befadaf1158 is in state STARTED 2025-07-12 20:29:27.841666 | orchestrator | 2025-07-12 20:29:27 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED 2025-07-12 20:29:27.842406 | orchestrator | 2025-07-12 20:29:27 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED 2025-07-12 20:29:27.843711 | orchestrator | 2025-07-12 20:29:27 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:29:30.881447 | orchestrator | 2025-07-12 20:29:30 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED 2025-07-12 20:29:30.882149 | orchestrator 
| 2025-07-12 20:29:30 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED 2025-07-12 20:29:30.882891 | orchestrator | 2025-07-12 20:29:30 | INFO  | Task 39e2d3f2-f453-41d7-8de4-2befadaf1158 is in state STARTED 2025-07-12 20:29:30.884004 | orchestrator | 2025-07-12 20:29:30 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED 2025-07-12 20:29:30.885059 | orchestrator | 2025-07-12 20:29:30 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED 2025-07-12 20:29:30.885142 | orchestrator | 2025-07-12 20:29:30 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:29:33.921908 | orchestrator | 2025-07-12 20:29:33 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED 2025-07-12 20:29:33.922004 | orchestrator | 2025-07-12 20:29:33 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED 2025-07-12 20:29:33.922094 | orchestrator | 2025-07-12 20:29:33 | INFO  | Task 39e2d3f2-f453-41d7-8de4-2befadaf1158 is in state STARTED 2025-07-12 20:29:33.922293 | orchestrator | 2025-07-12 20:29:33 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED 2025-07-12 20:29:33.923033 | orchestrator | 2025-07-12 20:29:33 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED 2025-07-12 20:29:33.923065 | orchestrator | 2025-07-12 20:29:33 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:29:36.960797 | orchestrator | 2025-07-12 20:29:36 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED 2025-07-12 20:29:36.960877 | orchestrator | 2025-07-12 20:29:36 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED 2025-07-12 20:29:36.960883 | orchestrator | 2025-07-12 20:29:36 | INFO  | Task 39e2d3f2-f453-41d7-8de4-2befadaf1158 is in state STARTED 2025-07-12 20:29:36.960888 | orchestrator | 2025-07-12 20:29:36 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED 2025-07-12 20:29:36.960892 | orchestrator | 
2025-07-12 20:29:36 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED 2025-07-12 20:29:36.960915 | orchestrator | 2025-07-12 20:29:36 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:29:39.993257 | orchestrator | 2025-07-12 20:29:39 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED 2025-07-12 20:29:39.993475 | orchestrator | 2025-07-12 20:29:39 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED 2025-07-12 20:29:39.996864 | orchestrator | 2025-07-12 20:29:39 | INFO  | Task 39e2d3f2-f453-41d7-8de4-2befadaf1158 is in state STARTED 2025-07-12 20:29:39.996928 | orchestrator | 2025-07-12 20:29:39 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED 2025-07-12 20:29:39.997270 | orchestrator | 2025-07-12 20:29:39 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED 2025-07-12 20:29:39.997296 | orchestrator | 2025-07-12 20:29:39 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:29:43.035938 | orchestrator | 2025-07-12 20:29:43 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:29:43.036188 | orchestrator | 2025-07-12 20:29:43 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED 2025-07-12 20:29:43.037044 | orchestrator | 2025-07-12 20:29:43 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED 2025-07-12 20:29:43.037721 | orchestrator | 2025-07-12 20:29:43 | INFO  | Task 39e2d3f2-f453-41d7-8de4-2befadaf1158 is in state SUCCESS 2025-07-12 20:29:43.038487 | orchestrator | 2025-07-12 20:29:43 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED 2025-07-12 20:29:43.039745 | orchestrator | 2025-07-12 20:29:43 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED 2025-07-12 20:29:43.039842 | orchestrator | 2025-07-12 20:29:43 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:29:46.077817 | orchestrator | 2025-07-12 20:29:46 | INFO  | 
Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:29:46.078525 | orchestrator | 2025-07-12 20:29:46 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED 2025-07-12 20:29:46.079609 | orchestrator | 2025-07-12 20:29:46 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED 2025-07-12 20:29:46.081443 | orchestrator | 2025-07-12 20:29:46 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED 2025-07-12 20:29:46.082377 | orchestrator | 2025-07-12 20:29:46 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED 2025-07-12 20:29:46.082932 | orchestrator | 2025-07-12 20:29:46 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:29:49.126973 | orchestrator | 2025-07-12 20:29:49 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:29:49.129029 | orchestrator | 2025-07-12 20:29:49 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED 2025-07-12 20:29:49.129861 | orchestrator | 2025-07-12 20:29:49 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED 2025-07-12 20:29:49.130649 | orchestrator | 2025-07-12 20:29:49 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED 2025-07-12 20:29:49.132115 | orchestrator | 2025-07-12 20:29:49 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED 2025-07-12 20:29:49.132331 | orchestrator | 2025-07-12 20:29:49 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:29:52.172904 | orchestrator | 2025-07-12 20:29:52 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:29:52.173130 | orchestrator | 2025-07-12 20:29:52 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED 2025-07-12 20:29:52.173903 | orchestrator | 2025-07-12 20:29:52 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED 2025-07-12 20:29:52.174629 | orchestrator | 2025-07-12 20:29:52 | INFO  | Task 
2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED 2025-07-12 20:29:52.175468 | orchestrator | 2025-07-12 20:29:52 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED 2025-07-12 20:29:52.175501 | orchestrator | 2025-07-12 20:29:52 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:29:55.215686 | orchestrator | 2025-07-12 20:29:55 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:29:55.216281 | orchestrator | 2025-07-12 20:29:55 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED 2025-07-12 20:29:55.217112 | orchestrator | 2025-07-12 20:29:55 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED 2025-07-12 20:29:55.222301 | orchestrator | 2025-07-12 20:29:55 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED 2025-07-12 20:29:55.222357 | orchestrator | 2025-07-12 20:29:55 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED 2025-07-12 20:29:55.222395 | orchestrator | 2025-07-12 20:29:55 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:29:58.271103 | orchestrator | 2025-07-12 20:29:58 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:29:58.271833 | orchestrator | 2025-07-12 20:29:58 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED 2025-07-12 20:29:58.273301 | orchestrator | 2025-07-12 20:29:58 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED 2025-07-12 20:29:58.274165 | orchestrator | 2025-07-12 20:29:58 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED 2025-07-12 20:29:58.275529 | orchestrator | 2025-07-12 20:29:58 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED 2025-07-12 20:29:58.275832 | orchestrator | 2025-07-12 20:29:58 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:30:01.307034 | orchestrator | 2025-07-12 20:30:01 | INFO  | Task 
cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:30:01.308584 | orchestrator | 2025-07-12 20:30:01 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED 2025-07-12 20:30:01.309926 | orchestrator | 2025-07-12 20:30:01 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED 2025-07-12 20:30:01.311029 | orchestrator | 2025-07-12 20:30:01 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED 2025-07-12 20:30:01.311963 | orchestrator | 2025-07-12 20:30:01 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED 2025-07-12 20:30:01.312005 | orchestrator | 2025-07-12 20:30:01 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:30:04.345143 | orchestrator | 2025-07-12 20:30:04 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:30:04.347143 | orchestrator | 2025-07-12 20:30:04 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED 2025-07-12 20:30:04.348840 | orchestrator | 2025-07-12 20:30:04 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED 2025-07-12 20:30:04.350337 | orchestrator | 2025-07-12 20:30:04 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED 2025-07-12 20:30:04.351122 | orchestrator | 2025-07-12 20:30:04 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED 2025-07-12 20:30:04.351247 | orchestrator | 2025-07-12 20:30:04 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:30:07.400916 | orchestrator | 2025-07-12 20:30:07 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:30:07.402560 | orchestrator | 2025-07-12 20:30:07 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state STARTED 2025-07-12 20:30:07.404319 | orchestrator | 2025-07-12 20:30:07 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED 2025-07-12 20:30:07.405838 | orchestrator | 2025-07-12 20:30:07 | INFO  | Task 
2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED 2025-07-12 20:30:07.407156 | orchestrator | 2025-07-12 20:30:07 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED 2025-07-12 20:30:07.407212 | orchestrator | 2025-07-12 20:30:07 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:30:10.447242 | orchestrator | 2025-07-12 20:30:10 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:30:10.447331 | orchestrator | 2025-07-12 20:30:10 | INFO  | Task bfceb846-9bb5-47a8-a173-d1089310e2fc is in state SUCCESS 2025-07-12 20:30:10.448714 | orchestrator | 2025-07-12 20:30:10.448746 | orchestrator | 2025-07-12 20:30:10.448755 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:30:10.448764 | orchestrator | 2025-07-12 20:30:10.448771 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 20:30:10.448858 | orchestrator | Saturday 12 July 2025 20:29:03 +0000 (0:00:00.618) 0:00:00.618 ********* 2025-07-12 20:30:10.448870 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:30:10.448879 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:30:10.448886 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:30:10.448894 | orchestrator | ok: [testbed-manager] 2025-07-12 20:30:10.448901 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:30:10.448908 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:30:10.448915 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:30:10.448923 | orchestrator | 2025-07-12 20:30:10.448930 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 20:30:10.448937 | orchestrator | Saturday 12 July 2025 20:29:05 +0000 (0:00:02.313) 0:00:02.932 ********* 2025-07-12 20:30:10.448945 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-07-12 20:30:10.448965 | orchestrator | ok: [testbed-node-1] => 
(item=enable_ceph_rgw_True) 2025-07-12 20:30:10.448973 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-07-12 20:30:10.448990 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-07-12 20:30:10.448997 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-07-12 20:30:10.449005 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2025-07-12 20:30:10.449012 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-07-12 20:30:10.449019 | orchestrator | 2025-07-12 20:30:10.449026 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-07-12 20:30:10.449033 | orchestrator | 2025-07-12 20:30:10.449041 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-07-12 20:30:10.449048 | orchestrator | Saturday 12 July 2025 20:29:07 +0000 (0:00:01.149) 0:00:04.081 ********* 2025-07-12 20:30:10.449056 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:30:10.449065 | orchestrator | 2025-07-12 20:30:10.449072 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-07-12 20:30:10.449080 | orchestrator | Saturday 12 July 2025 20:29:08 +0000 (0:00:01.798) 0:00:05.879 ********* 2025-07-12 20:30:10.449087 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-07-12 20:30:10.449094 | orchestrator | 2025-07-12 20:30:10.449101 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-07-12 20:30:10.449109 | orchestrator | Saturday 12 July 2025 20:29:12 +0000 (0:00:03.224) 0:00:09.104 ********* 2025-07-12 20:30:10.449142 | orchestrator | changed: [testbed-node-0] => (item=swift -> 
https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-07-12 20:30:10.449153 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-07-12 20:30:10.449160 | orchestrator | 2025-07-12 20:30:10.449167 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-07-12 20:30:10.449175 | orchestrator | Saturday 12 July 2025 20:29:17 +0000 (0:00:05.582) 0:00:14.687 ********* 2025-07-12 20:30:10.449182 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 20:30:10.449189 | orchestrator | 2025-07-12 20:30:10.449196 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-07-12 20:30:10.449203 | orchestrator | Saturday 12 July 2025 20:29:20 +0000 (0:00:02.890) 0:00:17.577 ********* 2025-07-12 20:30:10.449211 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 20:30:10.449218 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-07-12 20:30:10.449226 | orchestrator | 2025-07-12 20:30:10.449233 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-07-12 20:30:10.449240 | orchestrator | Saturday 12 July 2025 20:29:23 +0000 (0:00:03.408) 0:00:20.985 ********* 2025-07-12 20:30:10.449247 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 20:30:10.449254 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-07-12 20:30:10.449261 | orchestrator | 2025-07-12 20:30:10.449268 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-07-12 20:30:10.449275 | orchestrator | Saturday 12 July 2025 20:29:30 +0000 (0:00:06.796) 0:00:27.782 ********* 2025-07-12 20:30:10.449282 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-07-12 20:30:10.449289 | orchestrator | 
2025-07-12 20:30:10.449297 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:30:10.449309 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:30:10.449336 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:30:10.449349 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:30:10.449361 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:30:10.449372 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:30:10.449475 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:30:10.449492 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-07-12 20:30:10.449501 | orchestrator | 2025-07-12 20:30:10.449509 | orchestrator | 2025-07-12 20:30:10.449518 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:30:10.449526 | orchestrator | Saturday 12 July 2025 20:29:38 +0000 (0:00:08.104) 0:00:35.886 ********* 2025-07-12 20:30:10.449535 | orchestrator | =============================================================================== 2025-07-12 20:30:10.449543 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 8.10s 2025-07-12 20:30:10.449550 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.80s 2025-07-12 20:30:10.449558 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.58s 2025-07-12 20:30:10.449567 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.41s 2025-07-12 
20:30:10.449575 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.22s 2025-07-12 20:30:10.449594 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.89s 2025-07-12 20:30:10.449602 | orchestrator | Group hosts based on Kolla action --------------------------------------- 2.31s 2025-07-12 20:30:10.449610 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.80s 2025-07-12 20:30:10.449619 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.15s 2025-07-12 20:30:10.449627 | orchestrator | 2025-07-12 20:30:10.449635 | orchestrator | 2025-07-12 20:30:10.449643 | orchestrator | PLAY [Bootstraph ceph dashboard] *********************************************** 2025-07-12 20:30:10.449651 | orchestrator | 2025-07-12 20:30:10.449659 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2025-07-12 20:30:10.449668 | orchestrator | Saturday 12 July 2025 20:28:53 +0000 (0:00:00.272) 0:00:00.272 ********* 2025-07-12 20:30:10.449675 | orchestrator | changed: [testbed-manager] 2025-07-12 20:30:10.449683 | orchestrator | 2025-07-12 20:30:10.449692 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2025-07-12 20:30:10.449700 | orchestrator | Saturday 12 July 2025 20:28:55 +0000 (0:00:02.182) 0:00:02.454 ********* 2025-07-12 20:30:10.449708 | orchestrator | changed: [testbed-manager] 2025-07-12 20:30:10.449716 | orchestrator | 2025-07-12 20:30:10.449724 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2025-07-12 20:30:10.449732 | orchestrator | Saturday 12 July 2025 20:28:56 +0000 (0:00:01.098) 0:00:03.553 ********* 2025-07-12 20:30:10.449740 | orchestrator | changed: [testbed-manager] 2025-07-12 20:30:10.449748 | orchestrator | 2025-07-12 20:30:10.449756 | orchestrator | TASK [Set 
mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2025-07-12 20:30:10.449764 | orchestrator | Saturday 12 July 2025 20:28:57 +0000 (0:00:01.064) 0:00:04.617 ********* 2025-07-12 20:30:10.449772 | orchestrator | changed: [testbed-manager] 2025-07-12 20:30:10.449780 | orchestrator | 2025-07-12 20:30:10.449788 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2025-07-12 20:30:10.449900 | orchestrator | Saturday 12 July 2025 20:28:58 +0000 (0:00:01.140) 0:00:05.758 ********* 2025-07-12 20:30:10.449909 | orchestrator | changed: [testbed-manager] 2025-07-12 20:30:10.449916 | orchestrator | 2025-07-12 20:30:10.449923 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2025-07-12 20:30:10.449931 | orchestrator | Saturday 12 July 2025 20:29:00 +0000 (0:00:01.404) 0:00:07.162 ********* 2025-07-12 20:30:10.449938 | orchestrator | changed: [testbed-manager] 2025-07-12 20:30:10.449945 | orchestrator | 2025-07-12 20:30:10.449952 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2025-07-12 20:30:10.449960 | orchestrator | Saturday 12 July 2025 20:29:01 +0000 (0:00:01.187) 0:00:08.350 ********* 2025-07-12 20:30:10.449967 | orchestrator | changed: [testbed-manager] 2025-07-12 20:30:10.449974 | orchestrator | 2025-07-12 20:30:10.449981 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2025-07-12 20:30:10.449988 | orchestrator | Saturday 12 July 2025 20:29:03 +0000 (0:00:02.034) 0:00:10.384 ********* 2025-07-12 20:30:10.449996 | orchestrator | changed: [testbed-manager] 2025-07-12 20:30:10.450003 | orchestrator | 2025-07-12 20:30:10.450166 | orchestrator | TASK [Create admin user] ******************************************************* 2025-07-12 20:30:10.450186 | orchestrator | Saturday 12 July 2025 20:29:04 +0000 (0:00:01.278) 0:00:11.662 ********* 2025-07-12 20:30:10.450194 
| orchestrator | changed: [testbed-manager]
2025-07-12 20:30:10.450205 | orchestrator |
2025-07-12 20:30:10.450217 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2025-07-12 20:30:10.450226 | orchestrator | Saturday 12 July 2025 20:29:45 +0000 (0:00:40.701) 0:00:52.364 *********
2025-07-12 20:30:10.450233 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:30:10.450240 | orchestrator |
2025-07-12 20:30:10.450247 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-07-12 20:30:10.450257 | orchestrator |
2025-07-12 20:30:10.450268 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-07-12 20:30:10.450287 | orchestrator | Saturday 12 July 2025 20:29:45 +0000 (0:00:00.210) 0:00:52.575 *********
2025-07-12 20:30:10.450294 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:30:10.450301 | orchestrator |
2025-07-12 20:30:10.450316 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-07-12 20:30:10.450324 | orchestrator |
2025-07-12 20:30:10.450331 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-07-12 20:30:10.450338 | orchestrator | Saturday 12 July 2025 20:29:57 +0000 (0:00:11.622) 0:01:04.197 *********
2025-07-12 20:30:10.450345 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:30:10.450352 | orchestrator |
2025-07-12 20:30:10.450359 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-07-12 20:30:10.450366 | orchestrator |
2025-07-12 20:30:10.450373 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-07-12 20:30:10.450380 | orchestrator | Saturday 12 July 2025 20:30:08 +0000 (0:00:11.224) 0:01:15.422 *********
2025-07-12 20:30:10.450387 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:30:10.450394 | orchestrator |
2025-07-12 20:30:10.450447 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:30:10.450472 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0  failed=0  skipped=1  rescued=0  ignored=0
2025-07-12 20:30:10.450486 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-07-12 20:30:10.450495 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-07-12 20:30:10.450502 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0  failed=0  skipped=0  rescued=0  ignored=0
2025-07-12 20:30:10.450509 | orchestrator |
2025-07-12 20:30:10.450530 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:30:10.450539 | orchestrator | Saturday 12 July 2025 20:30:09 +0000 (0:00:01.298) 0:01:16.721 *********
2025-07-12 20:30:10.450547 | orchestrator | ===============================================================================
2025-07-12 20:30:10.450555 | orchestrator | Create admin user ------------------------------------------------------ 40.70s
2025-07-12 20:30:10.450563 | orchestrator | Restart ceph manager service ------------------------------------------- 24.15s
2025-07-12 20:30:10.450571 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.18s
2025-07-12 20:30:10.450579 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.03s
2025-07-12 20:30:10.450587 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.40s
2025-07-12 20:30:10.450595 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.28s
2025-07-12 20:30:10.450603 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.19s
2025-07-12 20:30:10.450611 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.14s
2025-07-12 20:30:10.450620 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.10s
2025-07-12 20:30:10.450628 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.06s
2025-07-12 20:30:10.450636 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.21s
2025-07-12 20:30:10.450643 | orchestrator | 2025-07-12 20:30:10 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED
2025-07-12 20:30:10.450651 | orchestrator | 2025-07-12 20:30:10 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED
2025-07-12 20:30:10.450659 | orchestrator | 2025-07-12 20:30:10 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED
2025-07-12 20:30:10.450666 | orchestrator | 2025-07-12 20:30:10 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:30:13.497389 | orchestrator | 2025-07-12 20:30:13 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:30:13.498593 | orchestrator | 2025-07-12 20:30:13 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED
2025-07-12 20:30:13.498640 | orchestrator | 2025-07-12 20:30:13 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED
2025-07-12 20:30:13.499374 | orchestrator | 2025-07-12 20:30:13 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED
2025-07-12 20:30:13.499396 | orchestrator | 2025-07-12 20:30:13 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:32:24.682237 | orchestrator | 2025-07-12 20:32:24 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:32:24.684723 | orchestrator | 2025-07-12 20:32:24 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state STARTED
2025-07-12 20:32:24.686342 | orchestrator | 2025-07-12 20:32:24 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED
2025-07-12 20:32:24.687332 | orchestrator | 2025-07-12 20:32:24 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED
2025-07-12 20:32:24.687563 | orchestrator | 2025-07-12 20:32:24 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:32:27.736509 | orchestrator | 2025-07-12 20:32:27 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:32:27.738229 | orchestrator | 2025-07-12 20:32:27 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:32:27.740964 | orchestrator |
2025-07-12 20:32:27.741044 | orchestrator | 2025-07-12 20:32:27 | INFO  | Task 3b8066d8-1fb7-45b4-9983-93cfedac73e6 is in state SUCCESS
2025-07-12 20:32:27.743034 | orchestrator |
2025-07-12 20:32:27.743107 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:32:27.743122 | orchestrator |
2025-07-12 20:32:27.743150 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:32:27.743163 | orchestrator | Saturday 12 July 2025 20:29:02 +0000 (0:00:00.289) 0:00:00.289 *********
2025-07-12 20:32:27.743174 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:32:27.743186 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:32:27.743224 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:32:27.743236 | orchestrator |
2025-07-12 20:32:27.743247 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:32:27.743257 | orchestrator | Saturday 12 July 2025 20:29:02 +0000 (0:00:00.314) 0:00:00.604 *********
2025-07-12 20:32:27.743268 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True)
2025-07-12 20:32:27.743279 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True)
2025-07-12 20:32:27.743290 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True)
2025-07-12 20:32:27.743300 | orchestrator |
2025-07-12 20:32:27.743311 | orchestrator | PLAY [Apply role glance] *******************************************************
2025-07-12 20:32:27.743322 | orchestrator |
2025-07-12 20:32:27.743332 | orchestrator | TASK [glance : include_tasks] **************************************************
2025-07-12 20:32:27.743343 | orchestrator | Saturday 12 July 2025 20:29:03 +0000 (0:00:00.868) 0:00:01.473 *********
2025-07-12 20:32:27.743354 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:32:27.743365 | orchestrator |
2025-07-12 20:32:27.743376 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************
2025-07-12 20:32:27.743387 | orchestrator | Saturday 12 July 2025 20:29:05 +0000 (0:00:01.531) 0:00:03.004 *********
2025-07-12 20:32:27.743399 | orchestrator | changed: [testbed-node-0] => (item=glance (image))
2025-07-12 20:32:27.743410 | orchestrator |
2025-07-12 20:32:27.743420 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] ***********************
2025-07-12 20:32:27.743431 | orchestrator | Saturday 12 July 2025 20:29:09 +0000 (0:00:04.189) 0:00:07.194 *********
2025-07-12 20:32:27.743442 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal)
2025-07-12 20:32:27.743453 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public)
2025-07-12 20:32:27.743464 | orchestrator |
2025-07-12 20:32:27.743475 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************
2025-07-12 20:32:27.743486 | orchestrator | Saturday 12 July 2025 20:29:15 +0000 (0:00:05.690) 0:00:12.885 *********
2025-07-12 20:32:27.743497 | orchestrator | changed: [testbed-node-0] => (item=service)
2025-07-12 20:32:27.743507 | orchestrator |
2025-07-12 20:32:27.743518 | orchestrator | TASK [service-ks-register : glance | Creating users] ***************************
2025-07-12 20:32:27.743529 | orchestrator | Saturday 12 July 2025 20:29:18 +0000 (0:00:03.415) 0:00:16.301 *********
2025-07-12 20:32:27.743541 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 20:32:27.743552 | orchestrator | changed: [testbed-node-0] => (item=glance -> service)
2025-07-12 20:32:27.743563 | orchestrator |
2025-07-12 20:32:27.743573 | orchestrator | TASK [service-ks-register : glance | Creating roles] ***************************
2025-07-12 20:32:27.743584 | orchestrator | Saturday 12 July 2025 20:29:21 +0000 (0:00:03.402) 0:00:19.703 *********
2025-07-12 20:32:27.743595 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 20:32:27.743606 | orchestrator |
2025-07-12 20:32:27.743617 | orchestrator | TASK [service-ks-register : glance | Granting user roles] **********************
2025-07-12 20:32:27.743673 | orchestrator | Saturday 12 July 2025 20:29:25 +0000 (0:00:03.482) 0:00:23.186 *********
2025-07-12 20:32:27.743686 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin)
2025-07-12 20:32:27.743698 | orchestrator |
2025-07-12 20:32:27.743710 | orchestrator | TASK [glance : Ensuring config directories exist] ******************************
2025-07-12 20:32:27.743722 | orchestrator | Saturday 12 July 2025 20:29:29 +0000 (0:00:03.910) 0:00:27.096 *********
2025-07-12 20:32:27.743767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-07-12 20:32:27.743798 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:32:27.743812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:32:27.743832 | orchestrator | 2025-07-12 20:32:27.743843 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-12 20:32:27.743855 | orchestrator | Saturday 12 July 2025 20:29:39 +0000 (0:00:10.094) 0:00:37.190 ********* 2025-07-12 20:32:27.743872 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:32:27.743884 | orchestrator | 2025-07-12 20:32:27.743895 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-07-12 20:32:27.743911 | orchestrator | Saturday 12 July 2025 20:29:41 +0000 (0:00:02.049) 0:00:39.240 ********* 2025-07-12 20:32:27.743922 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:32:27.743932 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:32:27.743943 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:32:27.743954 | orchestrator | 2025-07-12 20:32:27.743964 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-07-12 20:32:27.743975 | orchestrator | Saturday 12 July 2025 20:29:48 +0000 (0:00:06.749) 0:00:45.989 ********* 2025-07-12 20:32:27.743985 | orchestrator | 
changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-12 20:32:27.743997 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-12 20:32:27.744008 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-12 20:32:27.744018 | orchestrator | 2025-07-12 20:32:27.744029 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-07-12 20:32:27.744040 | orchestrator | Saturday 12 July 2025 20:29:50 +0000 (0:00:02.045) 0:00:48.034 ********* 2025-07-12 20:32:27.744050 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-12 20:32:27.744061 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-12 20:32:27.744072 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-07-12 20:32:27.744083 | orchestrator | 2025-07-12 20:32:27.744094 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-07-12 20:32:27.744105 | orchestrator | Saturday 12 July 2025 20:29:51 +0000 (0:00:01.259) 0:00:49.294 ********* 2025-07-12 20:32:27.744115 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:32:27.744126 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:32:27.744136 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:32:27.744147 | orchestrator | 2025-07-12 20:32:27.744158 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-07-12 20:32:27.744169 | orchestrator | Saturday 12 July 2025 20:29:52 +0000 (0:00:01.171) 0:00:50.465 ********* 2025-07-12 20:32:27.744179 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:32:27.744190 | 
orchestrator | 2025-07-12 20:32:27.744201 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-07-12 20:32:27.744211 | orchestrator | Saturday 12 July 2025 20:29:52 +0000 (0:00:00.220) 0:00:50.685 ********* 2025-07-12 20:32:27.744222 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:32:27.744233 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:32:27.744250 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:32:27.744261 | orchestrator | 2025-07-12 20:32:27.744271 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-12 20:32:27.744282 | orchestrator | Saturday 12 July 2025 20:29:53 +0000 (0:00:00.345) 0:00:51.031 ********* 2025-07-12 20:32:27.744293 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:32:27.744304 | orchestrator | 2025-07-12 20:32:27.744314 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-07-12 20:32:27.744325 | orchestrator | Saturday 12 July 2025 20:29:53 +0000 (0:00:00.594) 0:00:51.625 ********* 2025-07-12 20:32:27.744343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:32:27.744370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:32:27.744390 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:32:27.744401 | orchestrator | 2025-07-12 20:32:27.744412 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-07-12 20:32:27.744423 | orchestrator | Saturday 12 July 2025 20:30:00 +0000 (0:00:06.821) 0:00:58.446 ********* 2025-07-12 20:32:27.744449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 20:32:27.744463 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:32:27.744475 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 20:32:27.744493 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:32:27.744518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 20:32:27.744532 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:32:27.744543 | orchestrator | 2025-07-12 20:32:27.744554 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-07-12 20:32:27.744564 | orchestrator | Saturday 12 July 2025 20:30:05 +0000 (0:00:05.063) 0:01:03.510 ********* 2025-07-12 20:32:27.744576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 20:32:27.744594 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:32:27.744616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-07-12 20:32:27.744700 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:32:27.744714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 
5', '']}}}})  2025-07-12 20:32:27.744733 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:32:27.744743 | orchestrator | 2025-07-12 20:32:27.744754 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-07-12 20:32:27.744765 | orchestrator | Saturday 12 July 2025 20:30:12 +0000 (0:00:06.235) 0:01:09.746 ********* 2025-07-12 20:32:27.744776 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:32:27.744786 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:32:27.744797 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:32:27.744808 | orchestrator | 2025-07-12 20:32:27.744819 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-07-12 20:32:27.744829 | orchestrator | Saturday 12 July 2025 20:30:17 +0000 (0:00:05.510) 0:01:15.256 ********* 2025-07-12 20:32:27.744855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:32:27.744869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:32:27.744888 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-07-12 20:32:27.744900 | orchestrator |
2025-07-12 20:32:27.744911 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2025-07-12 20:32:27.744922 | orchestrator | Saturday 12 July 2025 20:30:25 +0000 (0:00:07.630) 0:01:22.887 *********
2025-07-12 20:32:27.744933 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:32:27.744943 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:32:27.744954 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:32:27.744965 | orchestrator |
2025-07-12 20:32:27.744975 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-07-12 20:32:27.744991 | orchestrator | Saturday 12 July 2025 20:30:36 +0000 (0:00:11.454) 0:01:34.341 *********
2025-07-12 20:32:27.745003 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:32:27.745013 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:32:27.745028 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:32:27.745039 | orchestrator |
2025-07-12 20:32:27.745050 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-07-12 20:32:27.745061 | orchestrator | Saturday 12 July 2025 20:30:44 +0000 (0:00:07.633) 0:01:41.974 *********
2025-07-12 20:32:27.745071 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:32:27.745088 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:32:27.745099 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:32:27.745110 | orchestrator |
2025-07-12 20:32:27.745121 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-07-12 20:32:27.745131 | orchestrator | Saturday 12 July 2025 20:30:49 +0000 (0:00:05.227) 0:01:47.202 *********
2025-07-12 20:32:27.745142 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:32:27.745153 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:32:27.745164 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:32:27.745174 | orchestrator |
2025-07-12 20:32:27.745185 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-07-12 20:32:27.745195 | orchestrator | Saturday 12 July 2025 20:30:53 +0000 (0:00:04.200) 0:01:51.403 *********
2025-07-12 20:32:27.745206 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:32:27.745216 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:32:27.745226 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:32:27.745235 | orchestrator |
2025-07-12 20:32:27.745245 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-07-12 20:32:27.745255 | orchestrator | Saturday 12 July 2025 20:30:58 +0000 (0:00:04.715) 0:01:56.118 *********
2025-07-12 20:32:27.745264 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:32:27.745274 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:32:27.745283 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:32:27.745292 | orchestrator |
2025-07-12 20:32:27.745302 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-07-12 20:32:27.745311 | orchestrator | Saturday 12 July 2025 20:30:58 +0000 (0:00:00.330) 0:01:56.448 *********
2025-07-12 20:32:27.745321 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-07-12 20:32:27.745331 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:32:27.745340 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-07-12 20:32:27.745350 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:32:27.745360 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
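The `custom_member_list` entries that recur throughout this log are raw HAProxy `server` lines passed through by kolla-ansible's haproxy-config machinery, and `frontend_http_extra` / `backend_http_extra` are injected verbatim into the generated frontend and backend sections. As a hedged illustration only (this is not the actual rendered template output, and 192.168.16.9 as the internal VIP bind address is an assumption inferred from the `no_proxy` lists above), the resulting `haproxy.cfg` fragment for `glance_api` would look roughly like:

```
# Hypothetical sketch of the haproxy.cfg fragment implied by the logged
# glance_api service settings; real kolla-ansible output may differ.
frontend glance_api_front
    mode http
    timeout client 6h              # from frontend_http_extra
    bind 192.168.16.9:9292         # assumed internal VIP
    default_backend glance_api_back

backend glance_api_back
    mode http
    timeout server 6h              # from backend_http_extra
    # custom_member_list entries, verbatim from the deploy log:
    server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5
    server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5
    server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5
```

The `check inter 2000 rise 2 fall 5` options mean each backend is health-checked every 2000 ms, needs 2 consecutive successes to be marked up and 5 consecutive failures to be marked down.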
2025-07-12 20:32:27.745370 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:32:27.745379 | orchestrator | 2025-07-12 20:32:27.745389 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-07-12 20:32:27.745399 | orchestrator | Saturday 12 July 2025 20:31:02 +0000 (0:00:03.483) 0:01:59.931 ********* 2025-07-12 20:32:27.745409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:32:27.745439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:32:27.745452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': 
{'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:29.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-07-12 20:32:27.745462 | orchestrator | 2025-07-12 20:32:27.745472 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-07-12 20:32:27.745481 | orchestrator | Saturday 12 July 2025 20:31:06 +0000 (0:00:04.469) 0:02:04.401 ********* 2025-07-12 20:32:27.745490 | orchestrator | skipping: 
[testbed-node-0]
2025-07-12 20:32:27.745506 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:32:27.745516 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:32:27.745526 | orchestrator |
2025-07-12 20:32:27.745535 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2025-07-12 20:32:27.745545 | orchestrator | Saturday 12 July 2025 20:31:06 +0000 (0:00:00.303) 0:02:04.704 *********
2025-07-12 20:32:27.745554 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:32:27.745564 | orchestrator |
2025-07-12 20:32:27.745573 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-07-12 20:32:27.745583 | orchestrator | Saturday 12 July 2025 20:31:08 +0000 (0:00:01.793) 0:02:06.498 *********
2025-07-12 20:32:27.745593 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:32:27.745692 | orchestrator |
2025-07-12 20:32:27.745709 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-07-12 20:32:27.745726 | orchestrator | Saturday 12 July 2025 20:31:10 +0000 (0:00:02.090) 0:02:08.588 *********
2025-07-12 20:32:27.745745 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:32:27.745762 | orchestrator |
2025-07-12 20:32:27.745779 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-07-12 20:32:27.745797 | orchestrator | Saturday 12 July 2025 20:31:12 +0000 (0:00:02.101) 0:02:10.690 *********
2025-07-12 20:32:27.745807 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:32:27.745817 | orchestrator |
2025-07-12 20:32:27.745835 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-07-12 20:32:27.745845 | orchestrator | Saturday 12 July 2025 20:31:36 +0000 (0:00:24.037) 0:02:34.727 *********
2025-07-12 20:32:27.745855 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:32:27.745865 | orchestrator |
2025-07-12 20:32:27.745874 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-07-12 20:32:27.745884 | orchestrator | Saturday 12 July 2025 20:31:39 +0000 (0:00:02.198) 0:02:36.926 *********
2025-07-12 20:32:27.745893 | orchestrator |
2025-07-12 20:32:27.745903 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-07-12 20:32:27.745912 | orchestrator | Saturday 12 July 2025 20:31:39 +0000 (0:00:00.066) 0:02:36.992 *********
2025-07-12 20:32:27.745922 | orchestrator |
2025-07-12 20:32:27.745931 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-07-12 20:32:27.745941 | orchestrator | Saturday 12 July 2025 20:31:39 +0000 (0:00:00.064) 0:02:37.057 *********
2025-07-12 20:32:27.745950 | orchestrator |
2025-07-12 20:32:27.745960 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-07-12 20:32:27.745969 | orchestrator | Saturday 12 July 2025 20:31:39 +0000 (0:00:00.065) 0:02:37.123 *********
2025-07-12 20:32:27.745979 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:32:27.745989 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:32:27.745998 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:32:27.746008 | orchestrator |
2025-07-12 20:32:27.746070 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:32:27.746082 | orchestrator | testbed-node-0 : ok=26  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-07-12 20:32:27.746094 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-12 20:32:27.746103 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-07-12 20:32:27.746113 | orchestrator |
2025-07-12 20:32:27.746122 | orchestrator |
2025-07-12 20:32:27.746132 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:32:27.746141 | orchestrator | Saturday 12 July 2025 20:32:24 +0000 (0:00:45.370) 0:03:22.494 *********
2025-07-12 20:32:27.746151 | orchestrator | ===============================================================================
2025-07-12 20:32:27.746160 | orchestrator | glance : Restart glance-api container ---------------------------------- 45.37s
2025-07-12 20:32:27.746179 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 24.04s
2025-07-12 20:32:27.746189 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 11.45s
2025-07-12 20:32:27.746198 | orchestrator | glance : Ensuring config directories exist ----------------------------- 10.09s
2025-07-12 20:32:27.746208 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 7.63s
2025-07-12 20:32:27.746217 | orchestrator | glance : Copying over config.json files for services -------------------- 7.63s
2025-07-12 20:32:27.746227 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 6.82s
2025-07-12 20:32:27.746237 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 6.75s
2025-07-12 20:32:27.746246 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 6.24s
2025-07-12 20:32:27.746256 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 5.69s
2025-07-12 20:32:27.746266 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 5.51s
2025-07-12 20:32:27.746276 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.23s
2025-07-12 20:32:27.746285 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 5.06s
2025-07-12 20:32:27.746295 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 4.72s
2025-07-12 20:32:27.746304 | orchestrator | glance : Check glance containers ---------------------------------------- 4.47s
2025-07-12 20:32:27.746314 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.20s
2025-07-12 20:32:27.746323 | orchestrator | service-ks-register : glance | Creating services ------------------------ 4.19s
2025-07-12 20:32:27.746333 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 3.91s
2025-07-12 20:32:27.746342 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.48s
2025-07-12 20:32:27.746352 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.48s
2025-07-12 20:32:27.746367 | orchestrator | 2025-07-12 20:32:27 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED
2025-07-12 20:32:27.746842 | orchestrator | 2025-07-12 20:32:27 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED
2025-07-12 20:32:27.746900 | orchestrator | 2025-07-12 20:32:27 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:32:30.799551 | orchestrator | 2025-07-12 20:32:30 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:32:30.799720 | orchestrator | 2025-07-12 20:32:30 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:32:30.799739 | orchestrator | 2025-07-12 20:32:30 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED
2025-07-12 20:32:30.800951 | orchestrator | 2025-07-12 20:32:30 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED
2025-07-12 20:32:30.800990 | orchestrator | 2025-07-12 20:32:30 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:32:33.853698 | orchestrator | 2025-07-12 20:32:33 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
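The repeating "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines are produced by the OSISM client polling Celery-style task IDs until they reach a terminal state. A minimal sketch of such a poll loop (hypothetical function and parameter names, not the actual osism code) looks like:

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600.0):
    """Poll task states until every task reaches a terminal state.

    get_state: callable mapping a task id to a state string such as
    "STARTED", "SUCCESS", or "FAILURE" (hypothetical API).
    """
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending:
        # Iterate over a snapshot so we can discard finished tasks safely.
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if not pending:
            break
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
```

The log shows exactly this cadence: each pass reports every still-pending task, then sleeps one second before rechecking, so a slow deploy produces long runs of near-identical status lines.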
2025-07-12 20:32:33.856003 | orchestrator | 2025-07-12 20:32:33 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:32:33.857928 | orchestrator | 2025-07-12 20:32:33 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED
2025-07-12 20:32:33.859907 | orchestrator | 2025-07-12 20:32:33 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED
2025-07-12 20:32:33.859950 | orchestrator | 2025-07-12 20:32:33 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:32:36.913659 | orchestrator | 2025-07-12 20:32:36 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:32:36.924897 | orchestrator | 2025-07-12 20:32:36 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:32:36.928483 | orchestrator | 2025-07-12 20:32:36 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED
2025-07-12 20:32:36.929576 | orchestrator | 2025-07-12 20:32:36 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED
2025-07-12 20:32:36.929839 | orchestrator | 2025-07-12 20:32:36 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:32:39.973824 | orchestrator | 2025-07-12 20:32:39 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:32:39.974872 | orchestrator | 2025-07-12 20:32:39 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:32:39.975777 | orchestrator | 2025-07-12 20:32:39 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state STARTED
2025-07-12 20:32:39.977304 | orchestrator | 2025-07-12 20:32:39 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED
2025-07-12 20:32:39.977328 | orchestrator | 2025-07-12 20:32:39 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:32:43.019388 | orchestrator | 2025-07-12 20:32:43 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:32:43.022124 | orchestrator | 2025-07-12 20:32:43 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:32:43.026257 | orchestrator | 2025-07-12 20:32:43 | INFO  | Task 2e747034-81f6-4d53-b5fa-74e964780982 is in state SUCCESS
2025-07-12 20:32:43.027791 | orchestrator |
2025-07-12 20:32:43.028215 | orchestrator |
2025-07-12 20:32:43.028250 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:32:43.028270 | orchestrator |
2025-07-12 20:32:43.028288 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:32:43.028308 | orchestrator | Saturday 12 July 2025 20:28:53 +0000 (0:00:00.276) 0:00:00.276 *********
2025-07-12 20:32:43.028365 | orchestrator | ok: [testbed-manager]
2025-07-12 20:32:43.028387 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:32:43.028405 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:32:43.028425 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:32:43.028442 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:32:43.028457 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:32:43.028469 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:32:43.028480 | orchestrator |
2025-07-12 20:32:43.028491 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:32:43.028503 | orchestrator | Saturday 12 July 2025 20:28:54 +0000 (0:00:00.827) 0:00:01.104 *********
2025-07-12 20:32:43.028515 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2025-07-12 20:32:43.028528 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2025-07-12 20:32:43.028539 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2025-07-12 20:32:43.028550 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2025-07-12 20:32:43.028561 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2025-07-12 20:32:43.028572 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2025-07-12 20:32:43.028583 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2025-07-12 20:32:43.028595 | orchestrator |
2025-07-12 20:32:43.028680 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2025-07-12 20:32:43.028695 | orchestrator |
2025-07-12 20:32:43.028706 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2025-07-12 20:32:43.028715 | orchestrator | Saturday 12 July 2025 20:28:54 +0000 (0:00:00.754) 0:00:01.858 *********
2025-07-12 20:32:43.028727 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:32:43.028847 | orchestrator |
2025-07-12 20:32:43.028859 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2025-07-12 20:32:43.028869 | orchestrator | Saturday 12 July 2025 20:28:56 +0000 (0:00:02.084) 0:00:03.943 *********
2025-07-12 20:32:43.028897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:32:43.028912 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes':
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-12 20:32:43.028923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:32:43.028934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:32:43.028966 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.028979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.028989 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:32:43.029016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.029027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.029038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.029048 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:32:43.029058 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.029076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.029087 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:32:43.029104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.029119 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:32:43.029130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.029143 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.029163 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 20:32:43.029177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.029187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.029307 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.029325 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.029336 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.029346 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.029356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.029374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.029385 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.029402 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': 
{'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.029413 | orchestrator | 2025-07-12 20:32:43.029423 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-07-12 20:32:43.029433 | orchestrator | Saturday 12 July 2025 20:29:00 +0000 (0:00:04.061) 0:00:08.005 ********* 2025-07-12 20:32:43.029542 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:32:43.029558 | orchestrator | 2025-07-12 20:32:43.029641 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-07-12 20:32:43.029699 | orchestrator | Saturday 12 July 2025 20:29:02 +0000 (0:00:01.990) 0:00:09.995 ********* 2025-07-12 20:32:43.029712 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': 
'9091', 'active_passive': True}}}}) 2025-07-12 20:32:43.029723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:32:43.029733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:32:43.029753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:32:43.029773 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:32:43.029783 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:32:43.029793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.029809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.029819 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:32:43.029829 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:32:43.029840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.029907 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.029971 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.029984 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.030002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.030191 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.030222 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.030261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.030286 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 
'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 20:32:43.030314 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.030325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.030342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': 
{'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.030353 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.030363 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.030373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.031184 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.031247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.031257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.031264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': 
{'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.031272 | orchestrator | 2025-07-12 20:32:43.031280 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-07-12 20:32:43.031295 | orchestrator | Saturday 12 July 2025 20:29:09 +0000 (0:00:07.062) 0:00:17.058 ********* 2025-07-12 20:32:43.031303 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-07-12 20:32:43.031312 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 20:32:43.031320 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-07-12 20:32:43.031361 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-07-12 20:32:43.031372 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 
'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:32:43.031380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-07-12 20:32:43.031395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 20:32:43.031402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-07-12 
20:32:43.031409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.031422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.031429 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:32:43.031442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:32:43.031450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.031457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.031469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.031476 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.031483 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:32:43.031490 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:32:43.031497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:32:43.031509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.031516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.031527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.031535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.031542 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:32:43.031549 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:32:43.031559 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.031566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.031579 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:32:43.031586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:32:43.031593 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.031606 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.031613 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:32:43.031620 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:32:43.031628 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.031677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.031691 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:32:43.031703 | orchestrator |
2025-07-12 20:32:43.031715 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-07-12 20:32:43.031726 | orchestrator | Saturday 12 July 2025 20:29:11 +0000 (0:00:01.942) 0:00:19.000 *********
2025-07-12 20:32:43.031736 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 20:32:43.031755 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:32:43.031763 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.031779 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-07-12 20:32:43.031788 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.031796 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:32:43.031809 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:32:43.031817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.031831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.031839 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.031852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.031860 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:32:43.031869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:32:43.031877 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.031886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.031898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.031911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.031919 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:32:43.031927 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:32:43.031934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.031948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.031956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.031965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.031972 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:32:43.031984 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:32:43.031997 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.032006 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.032014 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:32:43.032023 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:32:43.032035 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.032044 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.032052 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:32:43.032060 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:32:43.032068 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.032088 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.032096 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:32:43.032103 | orchestrator |
2025-07-12 20:32:43.032111 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-07-12 20:32:43.032119 | orchestrator | Saturday 12 July 2025 20:29:14 +0000 (0:00:02.317) 0:00:21.318 *********
2025-07-12 20:32:43.032128 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-07-12 20:32:43.032137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:32:43.032148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:32:43.032156 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:32:43.032163 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:32:43.032174 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:32:43.032184 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:32:43.032191 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-07-12 20:32:43.032198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.032205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.032218 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.032225 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.032232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.032247 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.032255 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.032262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.032269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-07-12 20:32:43.032276 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.032288 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-07-12 20:32:43.032296 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.032312 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 20:32:43.032320 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.032327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.032334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.032346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.032353 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.032365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.032372 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.032383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.032390 | orchestrator | 2025-07-12 20:32:43.032397 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-07-12 20:32:43.032404 | orchestrator | Saturday 12 July 2025 20:29:20 +0000 (0:00:05.884) 0:00:27.202 ********* 2025-07-12 20:32:43.032411 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-12 20:32:43.032418 | orchestrator | 2025-07-12 20:32:43.032425 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-07-12 20:32:43.032431 | orchestrator | Saturday 12 July 2025 20:29:21 +0000 (0:00:00.944) 0:00:28.147 ********* 2025-07-12 20:32:43.032438 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 569495, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0041313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032446 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 569495, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0041313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032458 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 569495, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0041313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032471 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 569495, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0041313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:32:43.032478 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 569445, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9951313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032488 | orchestrator | skipping: [testbed-node-2] => 
(item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 569495, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0041313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032495 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 569445, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9951313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032502 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 569495, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0041313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032509 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 569445, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9951313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032522 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 569445, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9951313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032535 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 569495, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0041313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032542 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 569445, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 
1752349798.9951313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032554 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 569488, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0031314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032561 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 569488, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0031314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032568 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 569488, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0031314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032575 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 569488, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0031314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032586 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 569488, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0031314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032599 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 569406, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9891312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032606 | orchestrator | skipping: 
[testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 569445, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9951313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032616 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 569445, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9951313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:32:43.032623 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 569406, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9891312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032630 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 569406, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9891312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032637 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 569522, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0081315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032720 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 569406, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9891312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032740 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 569406, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 
1752349798.9891312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032747 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 569488, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0031314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032754 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 569522, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0081315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032765 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 569522, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0081315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032772 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 569522, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0081315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032779 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 569452, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9961312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.032795 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 569488, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0031314, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:32:43.032803 | orchestrator | skipping: [testbed-node-4] => (item={'path': 
'/operations/prometheus/redfish.rules'})
2025-07-12 20:32:43.032810 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rec.rules)
2025-07-12 20:32:43.032817 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rec.rules)
2025-07-12 20:32:43.032828 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rec.rules)
2025-07-12 20:32:43.032835 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules)
2025-07-12 20:32:43.032842 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-07-12 20:32:43.032858 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-07-12 20:32:43.032866 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rec.rules)
2025-07-12 20:32:43.032874 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-12 20:32:43.032881 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2025-07-12 20:32:43.032891 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-07-12 20:32:43.032899 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-12 20:32:43.032906 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-07-12 20:32:43.032920 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rec.rules)
2025-07-12 20:32:43.032928 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/rabbitmq.rules)
2025-07-12 20:32:43.032935 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2025-07-12 20:32:43.032942 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-12 20:32:43.032953 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-07-12 20:32:43.032960 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/rabbitmq.rules)
2025-07-12 20:32:43.032972 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-12 20:32:43.032980 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rules)
2025-07-12 20:32:43.032990 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-07-12 20:32:43.032998 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/rabbitmq.rules)
2025-07-12 20:32:43.033005 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rules)
2025-07-12 20:32:43.033015 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-12 20:32:43.033022 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/redfish.rules)
2025-07-12 20:32:43.033034 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-12 20:32:43.033041 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/rabbitmq.rules)
2025-07-12 20:32:43.033051 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules)
2025-07-12 20:32:43.033059 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/mysql.rules)
2025-07-12 20:32:43.033066 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/rabbitmq.rules)
2025-07-12 20:32:43.033077 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rules)
2025-07-12 20:32:43.033084 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/rabbitmq.rules)
2025-07-12 20:32:43.033095 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rules)
2025-07-12 20:32:43.033102 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rules)
2025-07-12 20:32:43.033113 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/ceph.rules)
2025-07-12 20:32:43.033121 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/node.rec.rules)
2025-07-12 20:32:43.033128 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rules)
2025-07-12 20:32:43.033138 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rules)
2025-07-12 20:32:43.033145 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/mysql.rules)
2025-07-12 20:32:43.033157 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/mysql.rules)
2025-07-12 20:32:43.033164 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules)
2025-07-12 20:32:43.033176 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/mysql.rules)
2025-07-12 20:32:43.033183 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/mysql.rules)
2025-07-12 20:32:43.033190 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules)
2025-07-12 20:32:43.033202 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2025-07-12 20:32:43.033213 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rules)
2025-07-12 20:32:43.033221 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rules)
2025-07-12 20:32:43.033228 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rules)
2025-07-12 20:32:43.033239 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/fluentd-aggregator.rules)
2025-07-12 20:32:43.033247 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/haproxy.rules)
2025-07-12 20:32:43.033254 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules)
2025-07-12 20:32:43.033264 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2025-07-12 20:32:43.033276 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rules)
2025-07-12 20:32:43.033284 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rules)
2025-07-12 20:32:43.033291 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rules)
2025-07-12 20:32:43.033303 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2025-07-12 20:32:43.033311 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2025-07-12 20:32:43.033318 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-07-12 20:32:43.033329 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rules)
2025-07-12 20:32:43.033341 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)
2025-07-12 20:32:43.033348 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2025-07-12 20:32:43.033355 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2025-07-12 20:32:43.033366 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/openstack.rules)
2025-07-12 20:32:43.033373 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2025-07-12 20:32:43.033380 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/haproxy.rules)
2025-07-12 20:32:43.033390 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2025-07-12 20:32:43.033402 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2025-07-12 20:32:43.033409 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 569439, 'dev': 75, 'nlink': 1,
'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9941313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.033416 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 569518, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0081315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:32:43.033426 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 569462, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0011313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.033434 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:32:43.033441 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 569409, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9901311, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.033448 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 569478, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0021315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.033462 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 569439, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9941313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.033470 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 569409, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9901311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False})  2025-07-12 20:32:43.033477 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 569409, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9901311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.033484 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 569462, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0011313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.033490 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:32:43.033501 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 569478, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0021315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.033508 | orchestrator | skipping: [testbed-node-4] => 
(item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 569409, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9901311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.033515 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 569478, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0021315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.033530 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 569478, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0021315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.033537 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 569478, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0021315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.033544 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 569462, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0011313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.033551 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:32:43.033558 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 569414, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9921312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:32:43.033569 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 569462, 'dev': 75, 'nlink': 1, 
'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0011313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.033576 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:32:43.033583 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 569462, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0011313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.033595 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:32:43.033602 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 569462, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0011313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-07-12 20:32:43.033609 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:32:43.033616 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 
'inode': 569449, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9951313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:32:43.033623 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 569501, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0051315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:32:43.033630 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 569397, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.988131, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:32:43.033637 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 569427, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9931312, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:32:43.033715 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 569439, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9941313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:32:43.033729 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 569409, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9901311, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-07-12 20:32:43.033768 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 569478, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0021315, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 
2025-07-12 20:32:43.033782 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 569462, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349799.0011313, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-07-12 20:32:43.033789 | orchestrator |
2025-07-12 20:32:43.033796 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-07-12 20:32:43.033803 | orchestrator | Saturday 12 July 2025 20:29:54 +0000 (0:00:33.653) 0:01:01.800 *********
2025-07-12 20:32:43.033810 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 20:32:43.033817 | orchestrator |
2025-07-12 20:32:43.033824 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-07-12 20:32:43.033831 | orchestrator | Saturday 12 July 2025 20:29:55 +0000 (0:00:01.189) 0:01:02.989 *********
2025-07-12 20:32:43.033838 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-0/prometheus.yml.d' is not a directory
2025-07-12 20:32:43.033872 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 20:32:43.033879 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-manager/prometheus.yml.d' is not a directory
2025-07-12 20:32:43.033912 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-3/prometheus.yml.d' is not a directory
2025-07-12 20:32:43.033945 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-2/prometheus.yml.d' is not a directory
2025-07-12 20:32:43.033978 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-1/prometheus.yml.d' is not a directory
2025-07-12 20:32:43.034107 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-4/prometheus.yml.d' is not a directory
2025-07-12 20:32:43.034150 | orchestrator | [WARNING]: Skipped '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' path due to this access issue: '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-node-5/prometheus.yml.d' is not a directory
2025-07-12 20:32:43.034184 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 20:32:43.034191 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-12 20:32:43.034198 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-07-12 20:32:43.034205 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-07-12 20:32:43.034211 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-07-12 20:32:43.034218 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-07-12 20:32:43.034225 | orchestrator |
2025-07-12 20:32:43.034232 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-07-12 20:32:43.034239 | orchestrator | Saturday 12 July 2025 20:29:59 +0000 (0:00:03.165) 0:01:06.155 *********
2025-07-12 20:32:43.034246 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-07-12 20:32:43.034253 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:32:43.034259 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-07-12 20:32:43.034265 |
orchestrator | skipping: [testbed-node-1]
2025-07-12 20:32:43.034272 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-07-12 20:32:43.034278 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:32:43.034284 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-07-12 20:32:43.034291 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:32:43.034297 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-07-12 20:32:43.034303 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:32:43.034310 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-07-12 20:32:43.034321 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:32:43.034328 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-07-12 20:32:43.034335 | orchestrator |
2025-07-12 20:32:43.034341 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-07-12 20:32:43.034347 | orchestrator | Saturday 12 July 2025 20:30:27 +0000 (0:00:28.805) 0:01:34.961 *********
2025-07-12 20:32:43.034354 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-07-12 20:32:43.034360 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:32:43.034366 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-07-12 20:32:43.034374 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:32:43.034380 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-07-12 20:32:43.034394 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:32:43.034400 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-07-12 20:32:43.034407 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:32:43.034413 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-07-12 20:32:43.034420 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:32:43.034426 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-07-12 20:32:43.034432 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:32:43.034439 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-07-12 20:32:43.034445 | orchestrator |
2025-07-12 20:32:43.034451 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-07-12 20:32:43.034457 | orchestrator | Saturday 12 July 2025 20:30:33 +0000 (0:00:05.858) 0:01:40.820 *********
2025-07-12 20:32:43.034464 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-07-12 20:32:43.034471 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:32:43.034477 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-07-12 20:32:43.034484 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:32:43.034490 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-07-12 20:32:43.034496 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:32:43.034503 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-07-12 20:32:43.034509 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:32:43.034520 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-07-12 20:32:43.034527 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-07-12 20:32:43.034533 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:32:43.034540 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-07-12 20:32:43.034546 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:32:43.034553 | orchestrator |
2025-07-12 20:32:43.034559 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-07-12 20:32:43.034565 | orchestrator | Saturday 12 July 2025 20:30:36 +0000 (0:00:03.200) 0:01:44.020 *********
2025-07-12 20:32:43.034571 | orchestrator | ok: [testbed-manager -> localhost]
2025-07-12 20:32:43.034578 | orchestrator |
2025-07-12 20:32:43.034584 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-07-12 20:32:43.034590 | orchestrator | Saturday 12 July 2025 20:30:38 +0000 (0:00:01.677) 0:01:45.697 *********
2025-07-12 20:32:43.034596 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:32:43.034603 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:32:43.034609 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:32:43.034616 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:32:43.034622 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:32:43.034629 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:32:43.034635 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:32:43.034641 | orchestrator |
2025-07-12 20:32:43.034667 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-07-12 20:32:43.034674 | orchestrator | Saturday 12 July 2025 20:30:39 +0000 (0:00:01.172) 0:01:46.870 *********
2025-07-12 20:32:43.034681 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:32:43.034695 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:32:43.034701 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:32:43.034707 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:32:43.034713 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:32:43.034720 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:32:43.034726 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:32:43.034732 | orchestrator |
2025-07-12 20:32:43.034738 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-07-12 20:32:43.034745 | orchestrator | Saturday 12 July 2025 20:30:44 +0000 (0:00:04.349) 0:01:51.219 *********
2025-07-12 20:32:43.034751 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-07-12 20:32:43.034758 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-07-12 20:32:43.034768 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:32:43.034775 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:32:43.034781 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-07-12 20:32:43.034787 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:32:43.034794 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-07-12 20:32:43.034800 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:32:43.034806 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-07-12 20:32:43.034813 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:32:43.034819 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-07-12 20:32:43.034825 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:32:43.034831 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-07-12 20:32:43.034837 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:32:43.034843 | orchestrator |
2025-07-12 20:32:43.034850 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-07-12 20:32:43.034856 | orchestrator | Saturday 12 July 2025 20:30:46 +0000 (0:00:02.366) 0:01:53.586 *********
2025-07-12 20:32:43.034862 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 20:32:43.034869 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 20:32:43.034875 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:32:43.034882 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:32:43.034888 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 20:32:43.034894 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:32:43.034900 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 20:32:43.034906 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 20:32:43.034912 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:32:43.034922 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-07-12 20:32:43.034932 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:32:43.034942 | orchestrator | skipping: [testbed-node-5] =>
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-07-12 20:32:43.034953 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:32:43.034964 | orchestrator | 2025-07-12 20:32:43.034975 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-07-12 20:32:43.034986 | orchestrator | Saturday 12 July 2025 20:30:49 +0000 (0:00:02.577) 0:01:56.164 ********* 2025-07-12 20:32:43.034992 | orchestrator | [WARNING]: Skipped 2025-07-12 20:32:43.035004 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-07-12 20:32:43.035018 | orchestrator | due to this access issue: 2025-07-12 20:32:43.035024 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-07-12 20:32:43.035030 | orchestrator | not a directory 2025-07-12 20:32:43.035036 | orchestrator | ok: [testbed-manager -> localhost] 2025-07-12 20:32:43.035042 | orchestrator | 2025-07-12 20:32:43.035048 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-07-12 20:32:43.035055 | orchestrator | Saturday 12 July 2025 20:30:50 +0000 (0:00:01.378) 0:01:57.543 ********* 2025-07-12 20:32:43.035061 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:32:43.035067 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:32:43.035073 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:32:43.035079 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:32:43.035085 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:32:43.035091 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:32:43.035097 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:32:43.035103 | orchestrator | 2025-07-12 20:32:43.035110 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-07-12 20:32:43.035117 | orchestrator | Saturday 12 July 2025 20:30:51 +0000 
(0:00:01.524) 0:01:59.067 ********* 2025-07-12 20:32:43.035123 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:32:43.035129 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:32:43.035135 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:32:43.035141 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:32:43.035147 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:32:43.035153 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:32:43.035159 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:32:43.035166 | orchestrator | 2025-07-12 20:32:43.035172 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-07-12 20:32:43.035178 | orchestrator | Saturday 12 July 2025 20:30:53 +0000 (0:00:01.186) 0:02:00.253 ********* 2025-07-12 20:32:43.035187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:32:43.035207 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-07-12 20:32:43.035220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:32:43.035228 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.035244 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:32:43.035251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:32:43.035258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.035264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.035275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.035283 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.035290 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:32:43.035303 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:32:43.035314 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-07-12 20:32:43.035321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.035327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.035334 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.035344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.035351 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.035365 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-07-12 20:32:43.035378 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.035385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.035392 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.035402 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.035409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.035416 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.035428 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-07-12 20:32:43.035435 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.035445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.035452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-07-12 20:32:43.035458 | orchestrator | 2025-07-12 20:32:43.035465 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-07-12 20:32:43.035471 | orchestrator | Saturday 12 July 2025 20:30:57 +0000 (0:00:04.563) 0:02:04.817 ********* 2025-07-12 20:32:43.035478 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-07-12 20:32:43.035484 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:32:43.035490 | orchestrator | 2025-07-12 20:32:43.035496 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-12 20:32:43.035503 | orchestrator | Saturday 12 July 2025 20:30:58 +0000 (0:00:01.208) 0:02:06.026 ********* 2025-07-12 20:32:43.035509 | orchestrator | 2025-07-12 20:32:43.035515 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-12 20:32:43.035522 | orchestrator | Saturday 12 July 2025 20:30:59 +0000 (0:00:00.078) 0:02:06.104 ********* 2025-07-12 20:32:43.035528 | orchestrator | 2025-07-12 20:32:43.035534 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-12 20:32:43.035540 | orchestrator | Saturday 12 July 2025 20:30:59 +0000 (0:00:00.070) 0:02:06.175 ********* 2025-07-12 20:32:43.035547 | orchestrator | 2025-07-12 20:32:43.035553 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-12 20:32:43.035565 | orchestrator | Saturday 12 July 2025 20:30:59 +0000 (0:00:00.068) 0:02:06.244 ********* 2025-07-12 20:32:43.035576 | orchestrator | 2025-07-12 20:32:43.035583 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-12 20:32:43.035589 | orchestrator | Saturday 12 July 2025 20:30:59 +0000 (0:00:00.065) 0:02:06.310 ********* 2025-07-12 20:32:43.035595 
| orchestrator | 2025-07-12 20:32:43.035601 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-12 20:32:43.035607 | orchestrator | Saturday 12 July 2025 20:30:59 +0000 (0:00:00.094) 0:02:06.405 ********* 2025-07-12 20:32:43.035613 | orchestrator | 2025-07-12 20:32:43.035620 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-07-12 20:32:43.035626 | orchestrator | Saturday 12 July 2025 20:30:59 +0000 (0:00:00.246) 0:02:06.651 ********* 2025-07-12 20:32:43.035632 | orchestrator | 2025-07-12 20:32:43.035638 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-07-12 20:32:43.035694 | orchestrator | Saturday 12 July 2025 20:30:59 +0000 (0:00:00.086) 0:02:06.738 ********* 2025-07-12 20:32:43.035708 | orchestrator | changed: [testbed-manager] 2025-07-12 20:32:43.035719 | orchestrator | 2025-07-12 20:32:43.035728 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-07-12 20:32:43.035735 | orchestrator | Saturday 12 July 2025 20:31:17 +0000 (0:00:18.200) 0:02:24.938 ********* 2025-07-12 20:32:43.035741 | orchestrator | changed: [testbed-manager] 2025-07-12 20:32:43.035747 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:32:43.035754 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:32:43.035760 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:32:43.035766 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:32:43.035773 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:32:43.035779 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:32:43.035785 | orchestrator | 2025-07-12 20:32:43.035791 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-07-12 20:32:43.035797 | orchestrator | Saturday 12 July 2025 20:31:31 +0000 (0:00:13.782) 0:02:38.721 ********* 2025-07-12 20:32:43.035803 | 
orchestrator | changed: [testbed-node-2] 2025-07-12 20:32:43.035810 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:32:43.035816 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:32:43.035823 | orchestrator | 2025-07-12 20:32:43.035829 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-07-12 20:32:43.035835 | orchestrator | Saturday 12 July 2025 20:31:42 +0000 (0:00:11.116) 0:02:49.838 ********* 2025-07-12 20:32:43.035841 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:32:43.035847 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:32:43.035854 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:32:43.035860 | orchestrator | 2025-07-12 20:32:43.035866 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-07-12 20:32:43.035872 | orchestrator | Saturday 12 July 2025 20:31:52 +0000 (0:00:09.962) 0:02:59.800 ********* 2025-07-12 20:32:43.035878 | orchestrator | changed: [testbed-manager] 2025-07-12 20:32:43.035884 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:32:43.035891 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:32:43.035897 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:32:43.035903 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:32:43.035909 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:32:43.035915 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:32:43.035922 | orchestrator | 2025-07-12 20:32:43.035933 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-07-12 20:32:43.035940 | orchestrator | Saturday 12 July 2025 20:32:07 +0000 (0:00:14.920) 0:03:14.721 ********* 2025-07-12 20:32:43.035946 | orchestrator | changed: [testbed-manager] 2025-07-12 20:32:43.035953 | orchestrator | 2025-07-12 20:32:43.035959 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-07-12 
20:32:43.035965 | orchestrator | Saturday 12 July 2025 20:32:15 +0000 (0:00:08.228) 0:03:22.950 ********* 2025-07-12 20:32:43.035971 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:32:43.035978 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:32:43.035993 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:32:43.035999 | orchestrator | 2025-07-12 20:32:43.036005 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-07-12 20:32:43.036012 | orchestrator | Saturday 12 July 2025 20:32:25 +0000 (0:00:09.907) 0:03:32.858 ********* 2025-07-12 20:32:43.036018 | orchestrator | changed: [testbed-manager] 2025-07-12 20:32:43.036024 | orchestrator | 2025-07-12 20:32:43.036030 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-07-12 20:32:43.036036 | orchestrator | Saturday 12 July 2025 20:32:35 +0000 (0:00:10.149) 0:03:43.008 ********* 2025-07-12 20:32:43.036043 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:32:43.036049 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:32:43.036055 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:32:43.036061 | orchestrator | 2025-07-12 20:32:43.036067 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:32:43.036074 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-12 20:32:43.036081 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-12 20:32:43.036087 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-12 20:32:43.036094 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-07-12 20:32:43.036100 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  
rescued=0 ignored=0 2025-07-12 20:32:43.036111 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-07-12 20:32:43.036117 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-07-12 20:32:43.036124 | orchestrator | 2025-07-12 20:32:43.036130 | orchestrator | 2025-07-12 20:32:43.036137 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:32:43.036143 | orchestrator | Saturday 12 July 2025 20:32:42 +0000 (0:00:06.315) 0:03:49.323 ********* 2025-07-12 20:32:43.036149 | orchestrator | =============================================================================== 2025-07-12 20:32:43.036155 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 33.65s 2025-07-12 20:32:43.036161 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 28.81s 2025-07-12 20:32:43.036168 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.20s 2025-07-12 20:32:43.036174 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 14.92s 2025-07-12 20:32:43.036180 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 13.78s 2025-07-12 20:32:43.036186 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 11.12s 2025-07-12 20:32:43.036192 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------ 10.15s 2025-07-12 20:32:43.036198 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 9.96s 2025-07-12 20:32:43.036204 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.91s 2025-07-12 20:32:43.036211 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.23s 2025-07-12 
20:32:43.036216 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 7.06s 2025-07-12 20:32:43.036221 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 6.32s 2025-07-12 20:32:43.036227 | orchestrator | prometheus : Copying over config.json files ----------------------------- 5.88s 2025-07-12 20:32:43.036238 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.86s 2025-07-12 20:32:43.036244 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.56s 2025-07-12 20:32:43.036249 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 4.35s 2025-07-12 20:32:43.036255 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.06s 2025-07-12 20:32:43.036260 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.20s 2025-07-12 20:32:43.036266 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 3.16s 2025-07-12 20:32:43.036271 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.58s 2025-07-12 20:32:43.036276 | orchestrator | 2025-07-12 20:32:43 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED 2025-07-12 20:32:43.036285 | orchestrator | 2025-07-12 20:32:43 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:32:46.076912 | orchestrator | 2025-07-12 20:32:46 | INFO  | Task fd22be83-3866-4894-b31a-eaf54ba5ecec is in state STARTED 2025-07-12 20:32:46.080371 | orchestrator | 2025-07-12 20:32:46 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:32:46.082012 | orchestrator | 2025-07-12 20:32:46 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:32:46.083959 | orchestrator | 2025-07-12 20:32:46 | INFO  | Task 
19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED
2025-07-12 20:33:04.369868 | orchestrator | 2025-07-12 20:33:04 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:33:07.408178 | orchestrator | 2025-07-12 20:33:07 | INFO  | Task fd22be83-3866-4894-b31a-eaf54ba5ecec is in state STARTED
2025-07-12 20:33:07.408746 | orchestrator | 2025-07-12 20:33:07 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:33:07.409839 | orchestrator | 2025-07-12 20:33:07 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:33:07.410470 | orchestrator | 2025-07-12 20:33:07 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state STARTED
2025-07-12 20:33:07.410514 | orchestrator | 2025-07-12 20:33:07 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:33:10.449025 | orchestrator | 2025-07-12 20:33:10 | INFO  | Task fd22be83-3866-4894-b31a-eaf54ba5ecec is in state STARTED
2025-07-12 20:33:10.449282 | orchestrator | 2025-07-12 20:33:10 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:33:10.450268 | orchestrator | 2025-07-12 20:33:10 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:33:10.452441 | orchestrator | 2025-07-12 20:33:10 | INFO  | Task 19cd6dcc-9300-4d95-abbd-4f32ed4f8abe is in state SUCCESS
2025-07-12 20:33:10.454397 | orchestrator |
2025-07-12 20:33:10.454509 | orchestrator |
2025-07-12 20:33:10.454527 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:33:10.454540 | orchestrator |
2025-07-12 20:33:10.454552 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:33:10.454563 | orchestrator | Saturday 12 July 2025 20:29:03 +0000 (0:00:00.299) 0:00:00.299 *********
2025-07-12 20:33:10.454575 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:33:10.454587 | orchestrator | ok: [testbed-node-1]
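The "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above are produced by a client that polls remote task state until every task reaches a terminal state. A minimal sketch of such a loop, assuming a caller-supplied `get_state(task_id)` lookup and the STARTED/SUCCESS/FAILURE state names seen in the log (`wait_for_tasks` itself is a hypothetical helper, not the actual OSISM client):

```python
import time


def wait_for_tasks(get_state, task_ids, poll_interval=1.0, timeout=300.0,
                   clock=time.monotonic, sleep=time.sleep):
    """Poll every task ID until all of them reach a terminal state.

    get_state(task_id) -> str is supplied by the caller (for example a
    Celery AsyncResult state lookup). Injectable clock/sleep make the
    loop testable without real waiting.
    """
    terminal = {"SUCCESS", "FAILURE"}
    pending = set(task_ids)
    states = {}
    deadline = clock() + timeout
    while pending:
        if clock() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        # Re-check every still-pending task, logging its current state.
        for task_id in sorted(pending):
            states[task_id] = get_state(task_id)
            print(f"Task {task_id} is in state {states[task_id]}")
        pending = {t for t in pending if states[t] not in terminal}
        if pending:
            print(f"Wait {poll_interval:g} second(s) until the next check")
            sleep(poll_interval)
    return states
```

The roughly three-second gaps between one-second waits in the log suggest each state lookup itself takes time, which a loop of this shape naturally absorbs.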
2025-07-12 20:33:10.454600 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:33:10.454619 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:33:10.454638 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:33:10.454657 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:33:10.454779 | orchestrator |
2025-07-12 20:33:10.454802 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:33:10.454814 | orchestrator | Saturday 12 July 2025 20:29:05 +0000 (0:00:01.981) 0:00:02.280 *********
2025-07-12 20:33:10.454825 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True)
2025-07-12 20:33:10.454837 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True)
2025-07-12 20:33:10.454848 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True)
2025-07-12 20:33:10.454874 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True)
2025-07-12 20:33:10.454894 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True)
2025-07-12 20:33:10.454913 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True)
2025-07-12 20:33:10.455576 | orchestrator |
2025-07-12 20:33:10.455594 | orchestrator | PLAY [Apply role cinder] *******************************************************
2025-07-12 20:33:10.455605 | orchestrator |
2025-07-12 20:33:10.456886 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-07-12 20:33:10.456907 | orchestrator | Saturday 12 July 2025 20:29:06 +0000 (0:00:01.209) 0:00:03.490 *********
2025-07-12 20:33:10.456919 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:33:10.456933 | orchestrator |
2025-07-12 20:33:10.456982 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************
2025-07-12 20:33:10.457004 | orchestrator | Saturday 12 July 2025 20:29:08 +0000 (0:00:01.384) 0:00:04.875 *********
2025-07-12 20:33:10.457023 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3))
2025-07-12 20:33:10.457041 | orchestrator |
2025-07-12 20:33:10.457060 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] ***********************
2025-07-12 20:33:10.457079 | orchestrator | Saturday 12 July 2025 20:29:11 +0000 (0:00:03.124) 0:00:07.999 *********
2025-07-12 20:33:10.457098 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal)
2025-07-12 20:33:10.457119 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public)
2025-07-12 20:33:10.457137 | orchestrator |
2025-07-12 20:33:10.457157 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************
2025-07-12 20:33:10.457175 | orchestrator | Saturday 12 July 2025 20:29:16 +0000 (0:00:05.548) 0:00:13.548 *********
2025-07-12 20:33:10.457190 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-12 20:33:10.457201 | orchestrator |
2025-07-12 20:33:10.457212 | orchestrator | TASK [service-ks-register : cinder | Creating users] ***************************
2025-07-12 20:33:10.457223 | orchestrator | Saturday 12 July 2025 20:29:19 +0000 (0:00:02.867) 0:00:16.416 *********
2025-07-12 20:33:10.457233 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 20:33:10.457245 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service)
2025-07-12 20:33:10.457256 | orchestrator |
2025-07-12 20:33:10.457266 | orchestrator | TASK [service-ks-register : cinder | Creating roles] ***************************
2025-07-12 20:33:10.457278 | orchestrator | Saturday 12 July 2025 20:29:23 +0000 (0:00:03.539) 0:00:19.955 *********
2025-07-12 20:33:10.457288 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 20:33:10.457299
| orchestrator |
2025-07-12 20:33:10.457309 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] **********************
2025-07-12 20:33:10.457320 | orchestrator | Saturday 12 July 2025 20:29:26 +0000 (0:00:03.693) 0:00:23.649 *********
2025-07-12 20:33:10.457330 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin)
2025-07-12 20:33:10.457341 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service)
2025-07-12 20:33:10.457383 | orchestrator |
2025-07-12 20:33:10.457394 | orchestrator | TASK [cinder : Ensuring config directories exist] ******************************
2025-07-12 20:33:10.457407 | orchestrator | Saturday 12 July 2025 20:29:35 +0000 (0:00:08.591) 0:00:32.241 *********
2025-07-12 20:33:10.457453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 20:33:10.457497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 20:33:10.457510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 20:33:10.457524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.457538 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.457550 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.457583 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.457602 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.457617 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.457630 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.457643 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.457671 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.457762 | orchestrator |
2025-07-12 20:33:10.457784 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-07-12 20:33:10.457804 | orchestrator | Saturday 12 July 2025 20:29:40 +0000 (0:00:04.993) 0:00:37.234 *********
2025-07-12 20:33:10.457824 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:33:10.457845 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:33:10.457865 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:33:10.457880 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:33:10.457890 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:33:10.457901 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:33:10.457912 | orchestrator |
2025-07-12 20:33:10.457923 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-07-12 20:33:10.457933 | orchestrator | Saturday 12 July 2025 20:29:42 +0000 (0:00:01.917) 0:00:39.151 *********
2025-07-12 20:33:10.457944 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:33:10.457965 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:33:10.457984 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:33:10.458004 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:33:10.458109 | orchestrator |
2025-07-12 20:33:10.458122 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs
exists] *************
2025-07-12 20:33:10.458133 | orchestrator | Saturday 12 July 2025 20:29:44 +0000 (0:00:02.042) 0:00:41.194 *********
2025-07-12 20:33:10.458144 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume)
2025-07-12 20:33:10.458156 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume)
2025-07-12 20:33:10.458167 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume)
2025-07-12 20:33:10.458177 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup)
2025-07-12 20:33:10.458188 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup)
2025-07-12 20:33:10.458199 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup)
2025-07-12 20:33:10.458210 | orchestrator |
2025-07-12 20:33:10.458220 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************
2025-07-12 20:33:10.458231 | orchestrator | Saturday 12 July 2025 20:29:47 +0000 (0:00:03.219) 0:00:44.413 *********
2025-07-12 20:33:10.458243 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-12 20:33:10.458267 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-12 20:33:10.458300 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-12 20:33:10.458316 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-12 20:33:10.458327 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-12 20:33:10.458337 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-12 20:33:10.458355 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-12 20:33:10.458374 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-12 20:33:10.458389 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-12 20:33:10.458400 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-12 20:33:10.458418 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-12 20:33:10.458429 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])
2025-07-12 20:33:10.458439 | orchestrator |
2025-07-12 20:33:10.458449 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] *****************
2025-07-12 20:33:10.458459 | orchestrator | Saturday 12 July 2025 20:29:51 +0000 (0:00:04.211) 0:00:48.625 *********
2025-07-12 20:33:10.458468 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-07-12 20:33:10.458480 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True})
2025-07-12
20:33:10.458490 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-07-12 20:33:10.458499 | orchestrator | 2025-07-12 20:33:10.458509 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-07-12 20:33:10.458519 | orchestrator | Saturday 12 July 2025 20:29:53 +0000 (0:00:01.952) 0:00:50.578 ********* 2025-07-12 20:33:10.458535 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-07-12 20:33:10.458545 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-07-12 20:33:10.458555 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-07-12 20:33:10.458564 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-07-12 20:33:10.458574 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-07-12 20:33:10.458583 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-07-12 20:33:10.458593 | orchestrator | 2025-07-12 20:33:10.458603 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-07-12 20:33:10.458612 | orchestrator | Saturday 12 July 2025 20:29:57 +0000 (0:00:03.908) 0:00:54.487 ********* 2025-07-12 20:33:10.458622 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-07-12 20:33:10.458632 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-07-12 20:33:10.458641 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-07-12 20:33:10.458656 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-07-12 20:33:10.458666 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-07-12 20:33:10.458675 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-07-12 20:33:10.458708 | orchestrator | 2025-07-12 20:33:10.458718 | orchestrator | TASK [cinder : 
Check if policies shall be overwritten] ************************* 2025-07-12 20:33:10.458734 | orchestrator | Saturday 12 July 2025 20:29:59 +0000 (0:00:01.575) 0:00:56.062 ********* 2025-07-12 20:33:10.458744 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:33:10.458754 | orchestrator | 2025-07-12 20:33:10.458763 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-07-12 20:33:10.458773 | orchestrator | Saturday 12 July 2025 20:29:59 +0000 (0:00:00.162) 0:00:56.225 ********* 2025-07-12 20:33:10.458783 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:33:10.458792 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:33:10.458802 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:33:10.458811 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:33:10.458820 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:33:10.458830 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:33:10.458839 | orchestrator | 2025-07-12 20:33:10.458849 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-07-12 20:33:10.458858 | orchestrator | Saturday 12 July 2025 20:30:01 +0000 (0:00:01.614) 0:00:57.839 ********* 2025-07-12 20:33:10.458869 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:33:10.458880 | orchestrator | 2025-07-12 20:33:10.458890 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-07-12 20:33:10.458899 | orchestrator | Saturday 12 July 2025 20:30:03 +0000 (0:00:01.925) 0:00:59.765 ********* 2025-07-12 20:33:10.458910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 
'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:33:10.458921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:33:10.458940 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 20:33:10.458967 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-07-12 20:33:10.458978 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 20:33:10.458988 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:33:10.458998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:33:10.459015 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 
'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 20:33:10.459038 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 20:33:10.459049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 
20:33:10.459059 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 20:33:10.459071 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 20:33:10.459080 | orchestrator | 2025-07-12 20:33:10.459091 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-07-12 20:33:10.459100 | orchestrator | Saturday 12 July 2025 20:30:07 +0000 (0:00:03.956) 0:01:03.721 ********* 2025-07-12 20:33:10.459116 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 20:33:10.459133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:33:10.459144 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:33:10.459159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 20:33:10.459169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:33:10.459179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 
20:33:10.459189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:33:10.459199 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:33:10.459209 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:33:10.459225 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 20:33:10.459246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 20:33:10.459256 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:33:10.459266 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 20:33:10.459277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 20:33:10.459287 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:33:10.459297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-07-12 20:33:10.459320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-07-12 20:33:10.459331 | orchestrator | skipping: 
[testbed-node-5] 2025-07-12 20:33:10.459340 | orchestrator | 2025-07-12 20:33:10.459350 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-07-12 20:33:10.459360 | orchestrator | Saturday 12 July 2025 20:30:09 +0000 (0:00:02.748) 0:01:06.470 ********* 2025-07-12 20:33:10.459374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 20:33:10.459385 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:33:10.459395 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:33:10.459405 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 20:33:10.459415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:33:10.459431 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:33:10.459447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-07-12 20:33:10.459462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:33:10.459473 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:33:10.459483 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.459493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.459503 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:33:10.459513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.459535 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.459546 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:33:10.459559 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.459570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.459580 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:33:10.459590 | orchestrator |
2025-07-12 20:33:10.459600 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2025-07-12 20:33:10.459609 | orchestrator | Saturday 12 July 2025 20:30:12 +0000 (0:00:02.584) 0:01:09.054 *********
2025-07-12 20:33:10.459620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 20:33:10.459635 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 20:33:10.459802 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 20:33:10.459821 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.459832 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.459842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.459864 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.459880 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.459896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.459906 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.459917 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.459927 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.459947 | orchestrator |
2025-07-12 20:33:10.459957 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2025-07-12 20:33:10.459966 | orchestrator | Saturday 12 July 2025 20:30:16 +0000 (0:00:04.086) 0:01:13.141 *********
2025-07-12 20:33:10.459976 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-07-12 20:33:10.459986 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:33:10.459996 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-07-12 20:33:10.460006 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:33:10.460015 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-07-12 20:33:10.460025 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-07-12 20:33:10.460034 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:33:10.460044 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-07-12 20:33:10.460059 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2025-07-12 20:33:10.460070 | orchestrator |
2025-07-12 20:33:10.460080 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2025-07-12 20:33:10.460089 | orchestrator | Saturday 12 July 2025 20:30:19 +0000 (0:00:02.565) 0:01:15.706 *********
2025-07-12 20:33:10.460104 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 20:33:10.460115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 20:33:10.460125 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.460141 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.460157 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 20:33:10.460172 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.460182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.460199 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.460209 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.460219 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.460235 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.460250 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.460260 | orchestrator |
2025-07-12 20:33:10.460270 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2025-07-12 20:33:10.460280 | orchestrator | Saturday 12 July 2025 20:30:32 +0000 (0:00:13.812) 0:01:29.519 *********
2025-07-12 20:33:10.460290 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:33:10.460299 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:33:10.460307 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:33:10.460315 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:33:10.460329 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:33:10.460337 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:33:10.460345 | orchestrator |
2025-07-12 20:33:10.460353 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2025-07-12 20:33:10.460361 | orchestrator | Saturday 12 July 2025 20:30:36 +0000 (0:00:03.790) 0:01:33.309 *********
2025-07-12 20:33:10.460369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 20:33:10.460377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.460385 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:33:10.460398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 20:33:10.460407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.460415 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:33:10.460427 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.460441 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.460449 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:33:10.460461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 20:33:10.460470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.460537 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.460552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.460566 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:33:10.460574 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:33:10.460582 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.460590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.460599 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:33:10.460664 | orchestrator |
2025-07-12 20:33:10.460672 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2025-07-12 20:33:10.460705 | orchestrator | Saturday 12 July 2025 20:30:38 +0000 (0:00:02.037) 0:01:35.347 *********
2025-07-12 20:33:10.460714 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:33:10.460722 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:33:10.460731 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:33:10.460738 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:33:10.460746 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:33:10.460754 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:33:10.460762 | orchestrator |
2025-07-12 20:33:10.460770 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2025-07-12 20:33:10.460778 | orchestrator | Saturday 12 July 2025 20:30:40 +0000 (0:00:01.592) 0:01:36.943 *********
2025-07-12 20:33:10.460793 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 20:33:10.460813 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 20:33:10.460822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-07-12 20:33:10.460831 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-07-12 20:33:10.460839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group':
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:33:10.460852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:33:10.460868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:33:10.460882 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 20:33:10.460890 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 20:33:10.460899 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-07-12 20:33:10.460907 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 20:33:10.460920 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-07-12 20:33:10.460934 | orchestrator | 2025-07-12 20:33:10.460942 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2025-07-12 20:33:10.460974 | orchestrator | Saturday 12 July 2025 20:30:44 +0000 (0:00:04.429) 0:01:41.372 ********* 2025-07-12 20:33:10.460983 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:33:10.460991 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:33:10.460999 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:33:10.461007 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:33:10.461015 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:33:10.461023 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:33:10.461030 | orchestrator | 2025-07-12 20:33:10.461038 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-07-12 20:33:10.461046 | orchestrator | Saturday 12 July 2025 20:30:46 +0000 (0:00:01.391) 0:01:42.763 ********* 2025-07-12 20:33:10.461054 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:33:10.461062 | orchestrator | 2025-07-12 20:33:10.461070 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-07-12 20:33:10.461078 | orchestrator | Saturday 12 July 2025 20:30:47 +0000 (0:00:01.886) 0:01:44.649 ********* 2025-07-12 20:33:10.461086 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:33:10.461094 | orchestrator | 2025-07-12 20:33:10.461102 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-07-12 20:33:10.461110 | orchestrator | Saturday 12 July 2025 20:30:50 +0000 (0:00:02.399) 0:01:47.048 ********* 2025-07-12 20:33:10.461117 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:33:10.461125 | orchestrator | 2025-07-12 20:33:10.461133 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 20:33:10.461141 | orchestrator | Saturday 12 July 2025 20:31:07 +0000 (0:00:16.751) 0:02:03.800 ********* 2025-07-12 20:33:10.461149 | orchestrator | 
2025-07-12 20:33:10.461157 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 20:33:10.461165 | orchestrator | Saturday 12 July 2025 20:31:07 +0000 (0:00:00.075) 0:02:03.875 ********* 2025-07-12 20:33:10.461172 | orchestrator | 2025-07-12 20:33:10.461181 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 20:33:10.461188 | orchestrator | Saturday 12 July 2025 20:31:07 +0000 (0:00:00.071) 0:02:03.946 ********* 2025-07-12 20:33:10.461196 | orchestrator | 2025-07-12 20:33:10.461204 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 20:33:10.461212 | orchestrator | Saturday 12 July 2025 20:31:07 +0000 (0:00:00.073) 0:02:04.020 ********* 2025-07-12 20:33:10.461220 | orchestrator | 2025-07-12 20:33:10.461228 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 20:33:10.461236 | orchestrator | Saturday 12 July 2025 20:31:07 +0000 (0:00:00.073) 0:02:04.094 ********* 2025-07-12 20:33:10.461243 | orchestrator | 2025-07-12 20:33:10.461251 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-07-12 20:33:10.461259 | orchestrator | Saturday 12 July 2025 20:31:07 +0000 (0:00:00.067) 0:02:04.162 ********* 2025-07-12 20:33:10.461267 | orchestrator | 2025-07-12 20:33:10.461275 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-07-12 20:33:10.461283 | orchestrator | Saturday 12 July 2025 20:31:07 +0000 (0:00:00.071) 0:02:04.233 ********* 2025-07-12 20:33:10.461291 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:33:10.461298 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:33:10.461306 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:33:10.461314 | orchestrator | 2025-07-12 20:33:10.461322 | orchestrator | RUNNING HANDLER [cinder : 
Restart cinder-scheduler container] ****************** 2025-07-12 20:33:10.461336 | orchestrator | Saturday 12 July 2025 20:31:32 +0000 (0:00:25.220) 0:02:29.453 ********* 2025-07-12 20:33:10.461344 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:33:10.461352 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:33:10.461359 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:33:10.461367 | orchestrator | 2025-07-12 20:33:10.461375 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-07-12 20:33:10.461383 | orchestrator | Saturday 12 July 2025 20:31:41 +0000 (0:00:08.308) 0:02:37.762 ********* 2025-07-12 20:33:10.461391 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:33:10.461399 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:33:10.461407 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:33:10.461414 | orchestrator | 2025-07-12 20:33:10.461422 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-07-12 20:33:10.461430 | orchestrator | Saturday 12 July 2025 20:33:01 +0000 (0:01:20.519) 0:03:58.282 ********* 2025-07-12 20:33:10.461438 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:33:10.461446 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:33:10.461454 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:33:10.461461 | orchestrator | 2025-07-12 20:33:10.461469 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-07-12 20:33:10.461477 | orchestrator | Saturday 12 July 2025 20:33:08 +0000 (0:00:06.630) 0:04:04.912 ********* 2025-07-12 20:33:10.461485 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:33:10.461493 | orchestrator | 2025-07-12 20:33:10.461501 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:33:10.461513 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 
failed=0 skipped=11  rescued=0 ignored=0 2025-07-12 20:33:10.461522 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-07-12 20:33:10.461530 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-07-12 20:33:10.461538 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-12 20:33:10.461546 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-12 20:33:10.461559 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-07-12 20:33:10.461567 | orchestrator | 2025-07-12 20:33:10.461575 | orchestrator | 2025-07-12 20:33:10.461583 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:33:10.461591 | orchestrator | Saturday 12 July 2025 20:33:08 +0000 (0:00:00.743) 0:04:05.656 ********* 2025-07-12 20:33:10.461598 | orchestrator | =============================================================================== 2025-07-12 20:33:10.461606 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 80.52s 2025-07-12 20:33:10.461614 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 25.22s 2025-07-12 20:33:10.461622 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 16.75s 2025-07-12 20:33:10.461630 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 13.81s 2025-07-12 20:33:10.461638 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.59s 2025-07-12 20:33:10.461646 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 8.31s 2025-07-12 20:33:10.461653 | orchestrator | cinder : Restart cinder-backup container 
-------------------------------- 6.63s 2025-07-12 20:33:10.461661 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 5.55s 2025-07-12 20:33:10.461674 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 4.99s 2025-07-12 20:33:10.461710 | orchestrator | cinder : Check cinder containers ---------------------------------------- 4.43s 2025-07-12 20:33:10.461725 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.21s 2025-07-12 20:33:10.461739 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.09s 2025-07-12 20:33:10.461753 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.96s 2025-07-12 20:33:10.461762 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.91s 2025-07-12 20:33:10.461770 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.79s 2025-07-12 20:33:10.461778 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.69s 2025-07-12 20:33:10.461786 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.54s 2025-07-12 20:33:10.461794 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 3.22s 2025-07-12 20:33:10.461801 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.12s 2025-07-12 20:33:10.461809 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 2.87s 2025-07-12 20:33:10.461817 | orchestrator | 2025-07-12 20:33:10 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:33:13.490357 | orchestrator | 2025-07-12 20:33:13 | INFO  | Task fd22be83-3866-4894-b31a-eaf54ba5ecec is in state STARTED 2025-07-12 20:33:13.490482 | orchestrator | 2025-07-12 20:33:13 | INFO  | Task 
cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:34:26.568723 | orchestrator | 2025-07-12 20:34:26 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:34:26.569693 | orchestrator | 2025-07-12 20:34:26 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED 2025-07-12 20:34:26.569926 | orchestrator | 2025-07-12 20:34:26 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:34:29.606065 | orchestrator | 2025-07-12 20:34:29 | INFO  | Task fd22be83-3866-4894-b31a-eaf54ba5ecec is in state STARTED 2025-07-12 20:34:29.606961 | orchestrator | 2025-07-12 20:34:29 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:34:29.608434 | orchestrator | 2025-07-12 20:34:29 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:34:29.609327 | orchestrator | 2025-07-12 20:34:29 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED 2025-07-12 20:34:29.609355 | orchestrator | 2025-07-12 20:34:29 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:34:32.647559 | orchestrator | 2025-07-12 20:34:32 | INFO  | Task fd22be83-3866-4894-b31a-eaf54ba5ecec is in state STARTED 2025-07-12 20:34:32.648098 | orchestrator | 2025-07-12 20:34:32 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:34:32.649313 | orchestrator | 2025-07-12 20:34:32 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:34:32.650194 | orchestrator | 2025-07-12 20:34:32 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED 2025-07-12 20:34:32.651507 | orchestrator | 2025-07-12 20:34:32 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:34:35.693916 | orchestrator | 2025-07-12 20:34:35 | INFO  | Task fd22be83-3866-4894-b31a-eaf54ba5ecec is in state STARTED 2025-07-12 20:34:35.695259 | orchestrator | 2025-07-12 20:34:35 | INFO  | Task 
cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:34:35.696190 | orchestrator | 2025-07-12 20:34:35 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:34:35.698140 | orchestrator | 2025-07-12 20:34:35 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED 2025-07-12 20:34:35.698281 | orchestrator | 2025-07-12 20:34:35 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:34:38.732618 | orchestrator | 2025-07-12 20:34:38 | INFO  | Task fd22be83-3866-4894-b31a-eaf54ba5ecec is in state STARTED 2025-07-12 20:34:38.734281 | orchestrator | 2025-07-12 20:34:38 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:34:38.735322 | orchestrator | 2025-07-12 20:34:38 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:34:38.736068 | orchestrator | 2025-07-12 20:34:38 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED 2025-07-12 20:34:38.736188 | orchestrator | 2025-07-12 20:34:38 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:34:41.780486 | orchestrator | 2025-07-12 20:34:41 | INFO  | Task fd22be83-3866-4894-b31a-eaf54ba5ecec is in state STARTED 2025-07-12 20:34:41.781197 | orchestrator | 2025-07-12 20:34:41 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:34:41.782117 | orchestrator | 2025-07-12 20:34:41 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:34:41.783210 | orchestrator | 2025-07-12 20:34:41 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED 2025-07-12 20:34:41.785200 | orchestrator | 2025-07-12 20:34:41 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:34:44.835751 | orchestrator | 2025-07-12 20:34:44 | INFO  | Task fd22be83-3866-4894-b31a-eaf54ba5ecec is in state STARTED 2025-07-12 20:34:44.836135 | orchestrator | 2025-07-12 20:34:44 | INFO  | Task 
cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:34:44.837367 | orchestrator | 2025-07-12 20:34:44 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:34:44.838234 | orchestrator | 2025-07-12 20:34:44 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED 2025-07-12 20:34:44.838281 | orchestrator | 2025-07-12 20:34:44 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:34:47.869010 | orchestrator | 2025-07-12 20:34:47 | INFO  | Task fd22be83-3866-4894-b31a-eaf54ba5ecec is in state STARTED 2025-07-12 20:34:47.871163 | orchestrator | 2025-07-12 20:34:47 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:34:47.872059 | orchestrator | 2025-07-12 20:34:47 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:34:47.872941 | orchestrator | 2025-07-12 20:34:47 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED 2025-07-12 20:34:47.872979 | orchestrator | 2025-07-12 20:34:47 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:34:50.913765 | orchestrator | 2025-07-12 20:34:50 | INFO  | Task fd22be83-3866-4894-b31a-eaf54ba5ecec is in state STARTED 2025-07-12 20:34:50.916461 | orchestrator | 2025-07-12 20:34:50 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:34:50.917471 | orchestrator | 2025-07-12 20:34:50 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:34:50.919742 | orchestrator | 2025-07-12 20:34:50 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED 2025-07-12 20:34:50.919844 | orchestrator | 2025-07-12 20:34:50 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:34:53.952178 | orchestrator | 2025-07-12 20:34:53 | INFO  | Task fd22be83-3866-4894-b31a-eaf54ba5ecec is in state STARTED 2025-07-12 20:34:53.953165 | orchestrator | 2025-07-12 20:34:53 | INFO  | Task 
cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:34:53.953456 | orchestrator | 2025-07-12 20:34:53 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:34:53.954510 | orchestrator | 2025-07-12 20:34:53 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED 2025-07-12 20:34:53.954757 | orchestrator | 2025-07-12 20:34:53 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:34:57.001277 | orchestrator | 2025-07-12 20:34:56 | INFO  | Task fd22be83-3866-4894-b31a-eaf54ba5ecec is in state STARTED 2025-07-12 20:34:57.004491 | orchestrator | 2025-07-12 20:34:56 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:34:57.005208 | orchestrator | 2025-07-12 20:34:57 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:34:57.009915 | orchestrator | 2025-07-12 20:34:57 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED 2025-07-12 20:34:57.009972 | orchestrator | 2025-07-12 20:34:57 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:35:00.043897 | orchestrator | 2025-07-12 20:35:00 | INFO  | Task fd22be83-3866-4894-b31a-eaf54ba5ecec is in state STARTED 2025-07-12 20:35:00.044132 | orchestrator | 2025-07-12 20:35:00 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:35:00.045291 | orchestrator | 2025-07-12 20:35:00 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:35:00.045955 | orchestrator | 2025-07-12 20:35:00 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED 2025-07-12 20:35:00.045999 | orchestrator | 2025-07-12 20:35:00 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:35:03.092532 | orchestrator | 2025-07-12 20:35:03 | INFO  | Task fd22be83-3866-4894-b31a-eaf54ba5ecec is in state STARTED 2025-07-12 20:35:03.093066 | orchestrator | 2025-07-12 20:35:03 | INFO  | Task 
cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:35:03.095006 | orchestrator | 2025-07-12 20:35:03 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:35:03.096292 | orchestrator | 2025-07-12 20:35:03 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED 2025-07-12 20:35:03.096331 | orchestrator | 2025-07-12 20:35:03 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:35:06.128932 | orchestrator | 2025-07-12 20:35:06 | INFO  | Task fd22be83-3866-4894-b31a-eaf54ba5ecec is in state SUCCESS 2025-07-12 20:35:06.132341 | orchestrator | 2025-07-12 20:35:06.132410 | orchestrator | 2025-07-12 20:35:06.132421 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:35:06.132429 | orchestrator | 2025-07-12 20:35:06.132435 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-07-12 20:35:06.132443 | orchestrator | Saturday 12 July 2025 20:32:46 +0000 (0:00:00.276) 0:00:00.276 ********* 2025-07-12 20:35:06.132470 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:35:06.132479 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:35:06.132503 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:35:06.132511 | orchestrator | 2025-07-12 20:35:06.132565 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 20:35:06.132572 | orchestrator | Saturday 12 July 2025 20:32:47 +0000 (0:00:00.307) 0:00:00.583 ********* 2025-07-12 20:35:06.132576 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-07-12 20:35:06.132581 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-07-12 20:35:06.132585 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-07-12 20:35:06.132589 | orchestrator | 2025-07-12 20:35:06.132593 | orchestrator | PLAY [Apply role barbican] 
***************************************************** 2025-07-12 20:35:06.132596 | orchestrator | 2025-07-12 20:35:06.132600 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-07-12 20:35:06.132604 | orchestrator | Saturday 12 July 2025 20:32:47 +0000 (0:00:00.419) 0:00:01.003 ********* 2025-07-12 20:35:06.132608 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:35:06.132613 | orchestrator | 2025-07-12 20:35:06.132629 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-07-12 20:35:06.132632 | orchestrator | Saturday 12 July 2025 20:32:48 +0000 (0:00:00.558) 0:00:01.561 ********* 2025-07-12 20:35:06.132637 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-07-12 20:35:06.132641 | orchestrator | 2025-07-12 20:35:06.132644 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-07-12 20:35:06.132664 | orchestrator | Saturday 12 July 2025 20:32:51 +0000 (0:00:03.324) 0:00:04.886 ********* 2025-07-12 20:35:06.132669 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-07-12 20:35:06.132673 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-07-12 20:35:06.132677 | orchestrator | 2025-07-12 20:35:06.132732 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-07-12 20:35:06.132737 | orchestrator | Saturday 12 July 2025 20:32:57 +0000 (0:00:06.237) 0:00:11.124 ********* 2025-07-12 20:35:06.132741 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-07-12 20:35:06.132745 | orchestrator | 2025-07-12 20:35:06.132748 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-07-12 
20:35:06.132752 | orchestrator | Saturday 12 July 2025 20:33:01 +0000 (0:00:03.219) 0:00:14.344 ********* 2025-07-12 20:35:06.132756 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-07-12 20:35:06.132763 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-07-12 20:35:06.132770 | orchestrator | 2025-07-12 20:35:06.132776 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-07-12 20:35:06.132782 | orchestrator | Saturday 12 July 2025 20:33:04 +0000 (0:00:03.587) 0:00:17.931 ********* 2025-07-12 20:35:06.132789 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-07-12 20:35:06.132795 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-07-12 20:35:06.132871 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-07-12 20:35:06.132878 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-07-12 20:35:06.132885 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-07-12 20:35:06.132891 | orchestrator | 2025-07-12 20:35:06.132897 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-07-12 20:35:06.132903 | orchestrator | Saturday 12 July 2025 20:33:19 +0000 (0:00:14.551) 0:00:32.482 ********* 2025-07-12 20:35:06.132909 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-07-12 20:35:06.132915 | orchestrator | 2025-07-12 20:35:06.132919 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-07-12 20:35:06.132932 | orchestrator | Saturday 12 July 2025 20:33:23 +0000 (0:00:04.016) 0:00:36.499 ********* 2025-07-12 20:35:06.132941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:35:06.132962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:35:06.132973 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:35:06.132978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.132985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.132995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133040 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133044 | orchestrator | 2025-07-12 20:35:06.133049 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-07-12 20:35:06.133053 | orchestrator | Saturday 12 July 2025 20:33:25 +0000 (0:00:02.448) 0:00:38.947 ********* 2025-07-12 20:35:06.133058 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-07-12 20:35:06.133062 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-07-12 20:35:06.133066 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals) 2025-07-12 20:35:06.133071 | orchestrator | 2025-07-12 20:35:06.133075 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-07-12 20:35:06.133079 | orchestrator | Saturday 12 July 2025 20:33:27 +0000 (0:00:01.628) 0:00:40.575 ********* 2025-07-12 20:35:06.133083 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:35:06.133088 | orchestrator | 2025-07-12 20:35:06.133092 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-07-12 20:35:06.133097 | orchestrator | Saturday 12 July 2025 20:33:27 +0000 (0:00:00.274) 0:00:40.850 ********* 2025-07-12 20:35:06.133101 | orchestrator | 
skipping: [testbed-node-0] 2025-07-12 20:35:06.133110 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:35:06.133114 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:35:06.133119 | orchestrator | 2025-07-12 20:35:06.133123 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-07-12 20:35:06.133128 | orchestrator | Saturday 12 July 2025 20:33:28 +0000 (0:00:00.730) 0:00:41.581 ********* 2025-07-12 20:35:06.133132 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:35:06.133136 | orchestrator | 2025-07-12 20:35:06.133140 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-07-12 20:35:06.133145 | orchestrator | Saturday 12 July 2025 20:33:29 +0000 (0:00:01.092) 0:00:42.674 ********* 2025-07-12 20:35:06.133149 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:35:06.133158 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': 
{'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:35:06.133166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:35:06.133171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133182 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133194 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133214 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133220 | orchestrator | 2025-07-12 20:35:06.133226 | orchestrator | TASK [service-cert-copy : barbican | Copying over 
backend internal TLS certificate] *** 2025-07-12 20:35:06.133232 | orchestrator | Saturday 12 July 2025 20:33:33 +0000 (0:00:04.154) 0:00:46.828 ********* 2025-07-12 20:35:06.133238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 20:35:06.133249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:35:06.133256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 
'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:35:06.133262 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:35:06.133272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 20:35:06.133279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 20:35:06.133287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:35:06.133290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:35:06.133294 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:35:06.133298 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:35:06.133302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:35:06.133306 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:35:06.133310 | orchestrator | 2025-07-12 20:35:06.133317 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-07-12 20:35:06.133321 | orchestrator | Saturday 12 July 2025 20:33:35 +0000 (0:00:01.954) 0:00:48.783 ********* 2025-07-12 20:35:06.133325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 20:35:06.133336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:35:06.133345 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:35:06.133349 | orchestrator | skipping: 
[testbed-node-0] 2025-07-12 20:35:06.133353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 20:35:06.133357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:35:06.133364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:35:06.133369 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:35:06.133375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 20:35:06.133384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:35:06.133388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:35:06.133391 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:35:06.133395 | orchestrator | 2025-07-12 20:35:06.133399 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-07-12 20:35:06.133403 | orchestrator | Saturday 12 July 2025 20:33:36 +0000 (0:00:01.034) 0:00:49.817 ********* 2025-07-12 20:35:06.133407 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:35:06.133414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:35:06 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:35:06.133695 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'},
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:35:06.133701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133720 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133724 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133738 | orchestrator | 2025-07-12 20:35:06.133742 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-07-12 20:35:06.133746 | orchestrator | Saturday 12 July 2025 20:33:40 +0000 (0:00:03.548) 0:00:53.366 ********* 2025-07-12 20:35:06.133750 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:35:06.133754 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:35:06.133758 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:35:06.133761 | orchestrator | 2025-07-12 20:35:06.133765 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-07-12 20:35:06.133769 | orchestrator | Saturday 12 July 2025 20:33:44 +0000 (0:00:04.054) 0:00:57.421 ********* 2025-07-12 20:35:06.133773 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 20:35:06.133777 | orchestrator | 2025-07-12 20:35:06.133780 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-07-12 20:35:06.133784 | orchestrator | Saturday 12 July 2025 20:33:45 +0000 (0:00:01.427) 0:00:58.849 ********* 2025-07-12 20:35:06.133788 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:35:06.133791 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:35:06.133795 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:35:06.133823 | orchestrator | 2025-07-12 20:35:06.133827 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-07-12 20:35:06.133831 | orchestrator | Saturday 12 July 2025 20:33:46 +0000 (0:00:01.170) 0:01:00.019 ********* 2025-07-12 20:35:06.133835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:35:06.133843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:35:06.133853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:35:06.133858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133866 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.133903 | orchestrator | 2025-07-12 20:35:06.133907 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-07-12 20:35:06.133911 | orchestrator | Saturday 12 July 2025 20:33:57 +0000 (0:00:11.209) 0:01:11.229 ********* 2025-07-12 20:35:06.133917 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 20:35:06.133921 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:35:06.133926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:35:06.133929 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:35:06.133936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 20:35:06.133945 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:35:06.133951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:35:06.133955 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:35:06.133959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-07-12 20:35:06.133963 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-07-12 20:35:06.133967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:35:06.133975 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:35:06.133979 | orchestrator | 2025-07-12 20:35:06.133983 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-07-12 20:35:06.133986 | orchestrator | Saturday 12 July 2025 20:34:00 +0000 (0:00:02.684) 0:01:13.914 ********* 2025-07-12 20:35:06.133994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:35:06.134003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:35:06.134007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-07-12 20:35:06.134011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.134054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.134079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.134083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.134090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.134094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:35:06.134098 | orchestrator | 2025-07-12 20:35:06.134102 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-07-12 20:35:06.134106 | orchestrator | Saturday 12 July 2025 20:34:04 +0000 (0:00:03.950) 0:01:17.864 ********* 2025-07-12 20:35:06.134110 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:35:06.134113 | 
orchestrator | skipping: [testbed-node-1] 2025-07-12 20:35:06.134117 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:35:06.134121 | orchestrator | 2025-07-12 20:35:06.134125 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2025-07-12 20:35:06.134128 | orchestrator | Saturday 12 July 2025 20:34:05 +0000 (0:00:00.862) 0:01:18.726 ********* 2025-07-12 20:35:06.134136 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:35:06.134140 | orchestrator | 2025-07-12 20:35:06.134143 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2025-07-12 20:35:06.134147 | orchestrator | Saturday 12 July 2025 20:34:07 +0000 (0:00:02.099) 0:01:20.826 ********* 2025-07-12 20:35:06.134151 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:35:06.134154 | orchestrator | 2025-07-12 20:35:06.134158 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2025-07-12 20:35:06.134162 | orchestrator | Saturday 12 July 2025 20:34:09 +0000 (0:00:02.038) 0:01:22.865 ********* 2025-07-12 20:35:06.134165 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:35:06.134169 | orchestrator | 2025-07-12 20:35:06.134173 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-07-12 20:35:06.134176 | orchestrator | Saturday 12 July 2025 20:34:21 +0000 (0:00:11.648) 0:01:34.513 ********* 2025-07-12 20:35:06.134180 | orchestrator | 2025-07-12 20:35:06.134184 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-07-12 20:35:06.134187 | orchestrator | Saturday 12 July 2025 20:34:21 +0000 (0:00:00.231) 0:01:34.745 ********* 2025-07-12 20:35:06.134191 | orchestrator | 2025-07-12 20:35:06.134195 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2025-07-12 20:35:06.134198 | orchestrator | Saturday 12 July 2025 20:34:21 
+0000 (0:00:00.222) 0:01:34.967 ********* 2025-07-12 20:35:06.134202 | orchestrator | 2025-07-12 20:35:06.134206 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2025-07-12 20:35:06.134209 | orchestrator | Saturday 12 July 2025 20:34:21 +0000 (0:00:00.210) 0:01:35.178 ********* 2025-07-12 20:35:06.134213 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:35:06.134217 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:35:06.134220 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:35:06.134224 | orchestrator | 2025-07-12 20:35:06.134228 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2025-07-12 20:35:06.134231 | orchestrator | Saturday 12 July 2025 20:34:36 +0000 (0:00:14.453) 0:01:49.631 ********* 2025-07-12 20:35:06.134235 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:35:06.134239 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:35:06.134245 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:35:06.134249 | orchestrator | 2025-07-12 20:35:06.134253 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2025-07-12 20:35:06.134257 | orchestrator | Saturday 12 July 2025 20:34:49 +0000 (0:00:13.211) 0:02:02.843 ********* 2025-07-12 20:35:06.134261 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:35:06.134265 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:35:06.134269 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:35:06.134273 | orchestrator | 2025-07-12 20:35:06.134278 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:35:06.134292 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-07-12 20:35:06.134298 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 20:35:06.134303 | 
orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 20:35:06.134307 | orchestrator | 2025-07-12 20:35:06.134311 | orchestrator | 2025-07-12 20:35:06.134315 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:35:06.134324 | orchestrator | Saturday 12 July 2025 20:35:02 +0000 (0:00:13.134) 0:02:15.978 ********* 2025-07-12 20:35:06.134328 | orchestrator | =============================================================================== 2025-07-12 20:35:06.134332 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 14.55s 2025-07-12 20:35:06.134338 | orchestrator | barbican : Restart barbican-api container ------------------------------ 14.45s 2025-07-12 20:35:06.134348 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 13.21s 2025-07-12 20:35:06.134354 | orchestrator | barbican : Restart barbican-worker container --------------------------- 13.14s 2025-07-12 20:35:06.134360 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.65s 2025-07-12 20:35:06.134366 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.21s 2025-07-12 20:35:06.134372 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.24s 2025-07-12 20:35:06.134378 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.15s 2025-07-12 20:35:06.134384 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 4.05s 2025-07-12 20:35:06.134390 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.02s 2025-07-12 20:35:06.134397 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.95s 2025-07-12 20:35:06.134403 | orchestrator | service-ks-register : barbican | 
Creating users ------------------------- 3.59s 2025-07-12 20:35:06.134409 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.55s 2025-07-12 20:35:06.134415 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.32s 2025-07-12 20:35:06.134422 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.22s 2025-07-12 20:35:06.134426 | orchestrator | barbican : Copying over existing policy file ---------------------------- 2.68s 2025-07-12 20:35:06.134430 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.45s 2025-07-12 20:35:06.134435 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.10s 2025-07-12 20:35:06.134439 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.04s 2025-07-12 20:35:06.134443 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 1.95s 2025-07-12 20:35:06.134448 | orchestrator | 2025-07-12 20:35:06 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:35:06.134629 | orchestrator | 2025-07-12 20:35:06 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED 2025-07-12 20:35:06.135495 | orchestrator | 2025-07-12 20:35:06 | INFO  | Task 2f46d3dc-e5c8-4eb1-b543-af5f0be37348 is in state STARTED 2025-07-12 20:35:06.135505 | orchestrator | 2025-07-12 20:35:06 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:35:09.165414 | orchestrator | 2025-07-12 20:35:09 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:35:09.165969 | orchestrator | 2025-07-12 20:35:09 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:35:09.166792 | orchestrator | 2025-07-12 20:35:09 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED 2025-07-12 
20:35:09.167700 | orchestrator | 2025-07-12 20:35:09 | INFO  | Task 2f46d3dc-e5c8-4eb1-b543-af5f0be37348 is in state STARTED 2025-07-12 20:35:09.167730 | orchestrator | 2025-07-12 20:35:09 | INFO  | Wait 1 second(s) until the next check [polling of tasks cb318024-99ec-4032-96d6-af1ae4fc13d5, 9053c53c-cf21-44f3-af8e-7dbf1cf36098, 3eea4c07-2c8b-4089-aeab-a17300c4debd and 2f46d3dc-e5c8-4eb1-b543-af5f0be37348 repeated every ~3 s, all in state STARTED, from 20:35:12 through 20:36:00] 2025-07-12 20:36:03.979905 | orchestrator | 2025-07-12 20:36:03 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED 2025-07-12 20:36:03.980534 | orchestrator | 2025-07-12 20:36:03 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:36:03.981505 | orchestrator | 2025-07-12 20:36:03 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:36:03.982732 | orchestrator | 2025-07-12 20:36:03 | INFO  | Task
3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED 2025-07-12 20:36:03.983522 | orchestrator | 2025-07-12 20:36:03 | INFO  | Task 2f46d3dc-e5c8-4eb1-b543-af5f0be37348 is in state SUCCESS 2025-07-12 20:36:03.984733 | orchestrator | 2025-07-12 20:36:03 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:36:07.025895 | orchestrator | 2025-07-12 20:36:07 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED 2025-07-12 20:36:07.025985 | orchestrator | 2025-07-12 20:36:07 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:36:07.025995 | orchestrator | 2025-07-12 20:36:07 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:36:07.026880 | orchestrator | 2025-07-12 20:36:07 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED 2025-07-12 20:36:07.026906 | orchestrator | 2025-07-12 20:36:07 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:36:10.070298 | orchestrator | 2025-07-12 20:36:10 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED 2025-07-12 20:36:10.072411 | orchestrator | 2025-07-12 20:36:10 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:36:10.074255 | orchestrator | 2025-07-12 20:36:10 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:36:10.078270 | orchestrator | 2025-07-12 20:36:10 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED 2025-07-12 20:36:10.078328 | orchestrator | 2025-07-12 20:36:10 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:36:13.117805 | orchestrator | 2025-07-12 20:36:13 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED 2025-07-12 20:36:13.120056 | orchestrator | 2025-07-12 20:36:13 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:36:13.120791 | orchestrator | 2025-07-12 20:36:13 | INFO  | Task 
9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:36:13.121652 | orchestrator | 2025-07-12 20:36:13 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED 2025-07-12 20:36:13.121685 | orchestrator | 2025-07-12 20:36:13 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:36:16.165755 | orchestrator | 2025-07-12 20:36:16 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED 2025-07-12 20:36:16.167562 | orchestrator | 2025-07-12 20:36:16 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:36:16.170215 | orchestrator | 2025-07-12 20:36:16 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:36:16.171727 | orchestrator | 2025-07-12 20:36:16 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED 2025-07-12 20:36:16.171760 | orchestrator | 2025-07-12 20:36:16 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:36:19.213590 | orchestrator | 2025-07-12 20:36:19 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED 2025-07-12 20:36:19.216274 | orchestrator | 2025-07-12 20:36:19 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:36:19.218263 | orchestrator | 2025-07-12 20:36:19 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:36:19.220361 | orchestrator | 2025-07-12 20:36:19 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED 2025-07-12 20:36:19.220932 | orchestrator | 2025-07-12 20:36:19 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:36:22.257973 | orchestrator | 2025-07-12 20:36:22 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED 2025-07-12 20:36:22.258541 | orchestrator | 2025-07-12 20:36:22 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:36:22.259362 | orchestrator | 2025-07-12 20:36:22 | INFO  | Task 
9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:36:22.261125 | orchestrator | 2025-07-12 20:36:22 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED
2025-07-12 20:36:22.261163 | orchestrator | 2025-07-12 20:36:22 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:36:25.293285 | orchestrator | 2025-07-12 20:36:25 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED
2025-07-12 20:36:25.297258 | orchestrator | 2025-07-12 20:36:25 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:36:25.300845 | orchestrator | 2025-07-12 20:36:25 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:36:25.301973 | orchestrator | 2025-07-12 20:36:25 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED
2025-07-12 20:36:25.302077 | orchestrator | 2025-07-12 20:36:25 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:36:28.344351 | orchestrator | 2025-07-12 20:36:28 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED
2025-07-12 20:36:28.347162 | orchestrator | 2025-07-12 20:36:28 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:36:28.350134 | orchestrator | 2025-07-12 20:36:28 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:36:28.352334 | orchestrator | 2025-07-12 20:36:28 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED
2025-07-12 20:36:28.352363 | orchestrator | 2025-07-12 20:36:28 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:36:31.399407 | orchestrator | 2025-07-12 20:36:31 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED
2025-07-12 20:36:31.399487 | orchestrator | 2025-07-12 20:36:31 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:36:31.401462 | orchestrator | 2025-07-12 20:36:31 | INFO  | Task
9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:36:31.402358 | orchestrator | 2025-07-12 20:36:31 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED
2025-07-12 20:36:31.402375 | orchestrator | 2025-07-12 20:36:31 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:36:34.437553 | orchestrator | 2025-07-12 20:36:34 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED
2025-07-12 20:36:34.438766 | orchestrator | 2025-07-12 20:36:34 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:36:34.443633 | orchestrator | 2025-07-12 20:36:34 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:36:34.445153 | orchestrator | 2025-07-12 20:36:34 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED
2025-07-12 20:36:34.445190 | orchestrator | 2025-07-12 20:36:34 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:36:37.492098 | orchestrator | 2025-07-12 20:36:37 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED
2025-07-12 20:36:37.492783 | orchestrator | 2025-07-12 20:36:37 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:36:37.494926 | orchestrator | 2025-07-12 20:36:37 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:36:37.498578 | orchestrator | 2025-07-12 20:36:37 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED
2025-07-12 20:36:37.498622 | orchestrator | 2025-07-12 20:36:37 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:36:40.538337 | orchestrator | 2025-07-12 20:36:40 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED
2025-07-12 20:36:40.541354 | orchestrator | 2025-07-12 20:36:40 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:36:40.544244 | orchestrator | 2025-07-12 20:36:40 | INFO  | Task
9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:36:40.546543 | orchestrator | 2025-07-12 20:36:40 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED
2025-07-12 20:36:40.546794 | orchestrator | 2025-07-12 20:36:40 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:36:43.587296 | orchestrator | 2025-07-12 20:36:43 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED
2025-07-12 20:36:43.587486 | orchestrator | 2025-07-12 20:36:43 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:36:43.588121 | orchestrator | 2025-07-12 20:36:43 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:36:43.590796 | orchestrator | 2025-07-12 20:36:43 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED
2025-07-12 20:36:43.590937 | orchestrator | 2025-07-12 20:36:43 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:36:46.628635 | orchestrator | 2025-07-12 20:36:46 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED
2025-07-12 20:36:46.629011 | orchestrator | 2025-07-12 20:36:46 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:36:46.630788 | orchestrator | 2025-07-12 20:36:46 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:36:46.631262 | orchestrator | 2025-07-12 20:36:46 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED
2025-07-12 20:36:46.631280 | orchestrator | 2025-07-12 20:36:46 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:36:49.660687 | orchestrator | 2025-07-12 20:36:49 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED
2025-07-12 20:36:49.660856 | orchestrator | 2025-07-12 20:36:49 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:36:49.661332 | orchestrator | 2025-07-12 20:36:49 | INFO  | Task
9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:36:49.662134 | orchestrator | 2025-07-12 20:36:49 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED
2025-07-12 20:36:49.662159 | orchestrator | 2025-07-12 20:36:49 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:36:52.695070 | orchestrator | 2025-07-12 20:36:52 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED
2025-07-12 20:36:52.695261 | orchestrator | 2025-07-12 20:36:52 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:36:52.695777 | orchestrator | 2025-07-12 20:36:52 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:36:52.697064 | orchestrator | 2025-07-12 20:36:52 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED
2025-07-12 20:36:52.697087 | orchestrator | 2025-07-12 20:36:52 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:36:55.736727 | orchestrator | 2025-07-12 20:36:55 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED
2025-07-12 20:36:55.737951 | orchestrator | 2025-07-12 20:36:55 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:36:55.740798 | orchestrator | 2025-07-12 20:36:55 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:36:55.742259 | orchestrator | 2025-07-12 20:36:55 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED
2025-07-12 20:36:55.742500 | orchestrator | 2025-07-12 20:36:55 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:36:58.788742 | orchestrator | 2025-07-12 20:36:58 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED
2025-07-12 20:36:58.788844 | orchestrator | 2025-07-12 20:36:58 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:36:58.790351 | orchestrator | 2025-07-12 20:36:58 | INFO  | Task
9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:36:58.792350 | orchestrator | 2025-07-12 20:36:58 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state STARTED
2025-07-12 20:36:58.792540 | orchestrator | 2025-07-12 20:36:58 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:37:01.838497 | orchestrator | 2025-07-12 20:37:01 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED
2025-07-12 20:37:01.840266 | orchestrator | 2025-07-12 20:37:01 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:37:01.841758 | orchestrator | 2025-07-12 20:37:01 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:37:01.846129 | orchestrator | 2025-07-12 20:37:01 | INFO  | Task 3eea4c07-2c8b-4089-aeab-a17300c4debd is in state SUCCESS
2025-07-12 20:37:01.848060 | orchestrator |
2025-07-12 20:37:01.848091 | orchestrator |
2025-07-12 20:37:01.848100 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-07-12 20:37:01.848108 | orchestrator |
2025-07-12 20:37:01.848116 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-07-12 20:37:01.848124 | orchestrator | Saturday 12 July 2025 20:35:13 +0000 (0:00:00.183) 0:00:00.183 *********
2025-07-12 20:37:01.848131 | orchestrator | changed: [localhost]
2025-07-12 20:37:01.848177 | orchestrator |
2025-07-12 20:37:01.848185 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-07-12 20:37:01.848193 | orchestrator | Saturday 12 July 2025 20:35:15 +0000 (0:00:02.232) 0:00:02.416 *********
2025-07-12 20:37:01.848200 | orchestrator | changed: [localhost]
2025-07-12 20:37:01.848207 | orchestrator |
2025-07-12 20:37:01.848259 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-07-12 20:37:01.848269 | orchestrator | Saturday 12 July 2025
20:35:53 +0000 (0:00:38.056) 0:00:40.473 *********
2025-07-12 20:37:01.848277 | orchestrator | changed: [localhost]
2025-07-12 20:37:01.848284 | orchestrator |
2025-07-12 20:37:01.848291 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:37:01.848298 | orchestrator |
2025-07-12 20:37:01.848305 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:37:01.848313 | orchestrator | Saturday 12 July 2025 20:35:58 +0000 (0:00:05.091) 0:00:45.564 *********
2025-07-12 20:37:01.848320 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:37:01.848327 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:37:01.848334 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:37:01.848341 | orchestrator |
2025-07-12 20:37:01.848348 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:37:01.848355 | orchestrator | Saturday 12 July 2025 20:35:59 +0000 (0:00:00.825) 0:00:46.389 *********
2025-07-12 20:37:01.848392 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-07-12 20:37:01.848399 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-07-12 20:37:01.848407 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-07-12 20:37:01.848414 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-07-12 20:37:01.848421 | orchestrator |
2025-07-12 20:37:01.848429 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-07-12 20:37:01.848436 | orchestrator | skipping: no hosts matched
2025-07-12 20:37:01.848444 | orchestrator |
2025-07-12 20:37:01.848451 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:37:01.848458 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
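The long run of "Task … is in state STARTED" / "Wait 1 second(s) until the next check" lines above is produced by a simple state-polling loop: query each pending task, drop the ones that reached SUCCESS, sleep one second, repeat. A minimal sketch of that pattern, assuming a hypothetical `get_state` callable in place of whatever result backend the orchestrator actually queries (the function and parameter names here are illustrative, not the real OSISM implementation):

```python
import time


def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600.0):
    """Poll task states until every task reaches SUCCESS.

    `get_state` is a hypothetical callable mapping a task ID to a state
    string such as "STARTED" or "SUCCESS"; it stands in for the real
    result backend. Raises TimeoutError if tasks remain pending.
    """
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending:
        # sorted() copies the set, so discarding while iterating is safe
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state == "SUCCESS":
                pending.discard(task_id)
        if not pending:
            break
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        print(f"Wait {int(interval)} second(s) until the next check")
        time.sleep(interval)
```

With four task IDs and a one-second interval this reproduces the cadence of the log: one status line per pending task, then a wait message, until the last task flips to SUCCESS.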
2025-07-12 20:37:01.848490 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:37:01.848514 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:37:01.848522 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:37:01.848529 | orchestrator |
2025-07-12 20:37:01.848536 | orchestrator |
2025-07-12 20:37:01.848544 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:37:01.848551 | orchestrator | Saturday 12 July 2025 20:36:00 +0000 (0:00:01.257) 0:00:47.647 *********
2025-07-12 20:37:01.848558 | orchestrator | ===============================================================================
2025-07-12 20:37:01.848565 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 38.06s
2025-07-12 20:37:01.848572 | orchestrator | Download ironic-agent kernel -------------------------------------------- 5.09s
2025-07-12 20:37:01.848579 | orchestrator | Ensure the destination directory exists --------------------------------- 2.23s
2025-07-12 20:37:01.848587 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.26s
2025-07-12 20:37:01.848594 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.82s
2025-07-12 20:37:01.848601 | orchestrator |
2025-07-12 20:37:01.848608 | orchestrator |
2025-07-12 20:37:01.848616 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:37:01.848623 | orchestrator |
2025-07-12 20:37:01.848630 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:37:01.848637 | orchestrator | Saturday 12 July 2025 20:33:18 +0000 (0:00:00.287) 0:00:00.287 *********
2025-07-12 20:37:01.848644 |
orchestrator | ok: [testbed-node-0]
2025-07-12 20:37:01.848652 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:37:01.848775 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:37:01.848783 | orchestrator |
2025-07-12 20:37:01.848792 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:37:01.848800 | orchestrator | Saturday 12 July 2025 20:33:18 +0000 (0:00:00.311) 0:00:00.599 *********
2025-07-12 20:37:01.848809 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2025-07-12 20:37:01.848818 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2025-07-12 20:37:01.848826 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2025-07-12 20:37:01.848834 | orchestrator |
2025-07-12 20:37:01.848843 | orchestrator | PLAY [Apply role designate] ****************************************************
2025-07-12 20:37:01.848851 | orchestrator |
2025-07-12 20:37:01.848860 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-07-12 20:37:01.848868 | orchestrator | Saturday 12 July 2025 20:33:18 +0000 (0:00:00.462) 0:00:01.062 *********
2025-07-12 20:37:01.848876 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:37:01.848886 | orchestrator |
2025-07-12 20:37:01.848895 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2025-07-12 20:37:01.848924 | orchestrator | Saturday 12 July 2025 20:33:19 +0000 (0:00:00.623) 0:00:01.685 *********
2025-07-12 20:37:01.848946 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2025-07-12 20:37:01.848955 | orchestrator |
2025-07-12 20:37:01.848964 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2025-07-12 20:37:01.848972 | orchestrator | Saturday 12 July 2025 20:33:22 +0000 (0:00:03.177)
0:00:04.863 *********
2025-07-12 20:37:01.848980 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2025-07-12 20:37:01.848989 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2025-07-12 20:37:01.849008 | orchestrator |
2025-07-12 20:37:01.849020 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2025-07-12 20:37:01.849032 | orchestrator | Saturday 12 July 2025 20:33:29 +0000 (0:00:06.794) 0:00:11.657 *********
2025-07-12 20:37:01.849045 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-12 20:37:01.849058 | orchestrator |
2025-07-12 20:37:01.849070 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2025-07-12 20:37:01.849082 | orchestrator | Saturday 12 July 2025 20:33:32 +0000 (0:00:03.300) 0:00:14.957 *********
2025-07-12 20:37:01.849091 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 20:37:01.849098 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2025-07-12 20:37:01.849105 | orchestrator |
2025-07-12 20:37:01.849112 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2025-07-12 20:37:01.849119 | orchestrator | Saturday 12 July 2025 20:33:36 +0000 (0:00:03.336) 0:00:18.294 *********
2025-07-12 20:37:01.849127 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 20:37:01.849134 | orchestrator |
2025-07-12 20:37:01.849141 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2025-07-12 20:37:01.849148 | orchestrator | Saturday 12 July 2025 20:33:39 +0000 (0:00:03.307) 0:00:21.602 *********
2025-07-12 20:37:01.849155 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin)
2025-07-12 20:37:01.849162 | orchestrator |
2025-07-12 20:37:01.849169 |
orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-07-12 20:37:01.849176 | orchestrator | Saturday 12 July 2025 20:33:43 +0000 (0:00:04.054) 0:00:25.656 ********* 2025-07-12 20:37:01.849216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:37:01.849245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:37:01.849262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:37:01.849282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:37:01.849291 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:37:01.849298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:37:01.849307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.849315 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.849323 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.849341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.849354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.849362 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.849370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.849377 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.849384 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.849392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.849412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.849420 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.849428 | orchestrator |
2025-07-12 20:37:01.849435 | orchestrator | TASK [designate : Check if policies shall be overwritten] **********************
2025-07-12 20:37:01.849443 | orchestrator | Saturday 12 July 2025 20:33:47 +0000 (0:00:03.589) 0:00:29.246 *********
2025-07-12 20:37:01.849450 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:01.849457 | orchestrator |
2025-07-12 20:37:01.849465 | orchestrator | TASK [designate : Set designate policy file] ***********************************
2025-07-12 20:37:01.849472 | orchestrator | Saturday 12 July 2025 20:33:47 +0000 (0:00:00.373) 0:00:29.619 *********
2025-07-12 20:37:01.849479 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:01.849486 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:01.849493 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:01.849501 | orchestrator |
2025-07-12 20:37:01.849508 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-07-12 20:37:01.849515 | orchestrator | Saturday 12 July 2025 20:33:48 +0000 (0:00:00.982) 0:00:30.602 *********
2025-07-12 20:37:01.849522 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:37:01.849529 | orchestrator |
2025-07-12 20:37:01.849537 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2025-07-12 20:37:01.849544 | orchestrator | Saturday 12 July 2025 20:33:50 +0000 (0:00:02.356) 0:00:32.958 *********
2025-07-12 20:37:01.849551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 20:37:01.849564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 20:37:01.849581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 20:37:01.849589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 20:37:01.849597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 20:37:01.849605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 20:37:01.849612 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.849624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.849648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.849660 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.849668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.849675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.849683 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.849700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.849708 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.849725 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.849733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.849741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.849748 | orchestrator |
2025-07-12 20:37:01.849756 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2025-07-12 20:37:01.849851 | orchestrator | Saturday 12 July 2025 20:33:58 +0000 (0:00:07.631) 0:00:40.590 *********
2025-07-12 20:37:01.849860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 20:37:01.849873 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 20:37:01.850437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850475 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850491 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:01.850499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 20:37:01.850515 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 20:37:01.850523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850549 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850557 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850564 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:01.850572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 20:37:01.850585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 20:37:01.850593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850605 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850616 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850632 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:01.850644 | orchestrator |
2025-07-12 20:37:01.850652 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2025-07-12 20:37:01.850659 | orchestrator | Saturday 12 July 2025 20:34:00 +0000 (0:00:02.311) 0:00:42.902 *********
2025-07-12 20:37:01.850667 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 20:37:01.850674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 20:37:01.850682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850697 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850705 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850726 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:01.850734 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 20:37:01.850742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 20:37:01.850749 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850780 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850793 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:01.850801 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 20:37:01.850809 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 20:37:01.850816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850872 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.850885 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:01.850892 | orchestrator |
2025-07-12 20:37:01.850947 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2025-07-12 20:37:01.850957 | orchestrator | Saturday 12 July 2025 20:34:03 +0000 (0:00:02.280) 0:00:45.182 *********
2025-07-12 20:37:01.850965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:37:01.850973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:37:01.850986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:37:01.851002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851033 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851089 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851102 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851196 | orchestrator | 2025-07-12 20:37:01.851203 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-07-12 20:37:01.851210 | orchestrator | Saturday 12 July 2025 20:34:10 +0000 (0:00:07.612) 0:00:52.795 ********* 2025-07-12 20:37:01.851218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:37:01.851226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:37:01.851237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-07-12 20:37:01.851254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 
'timeout': '30'}}}) 2025-07-12 20:37:01.851262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851277 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851307 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 
2025-07-12 20:37:01.851320 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851328 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851343 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851350 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 
'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851393 | orchestrator | 2025-07-12 20:37:01.851400 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-07-12 20:37:01.851407 | orchestrator | Saturday 12 July 2025 20:34:43 +0000 (0:00:32.443) 0:01:25.238 ********* 2025-07-12 20:37:01.851414 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-07-12 20:37:01.851422 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-07-12 20:37:01.851429 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-07-12 20:37:01.851436 | orchestrator | 2025-07-12 20:37:01.851443 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-07-12 
20:37:01.851450 | orchestrator | Saturday 12 July 2025 20:34:51 +0000 (0:00:08.597) 0:01:33.837 ********* 2025-07-12 20:37:01.851457 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-07-12 20:37:01.851464 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-07-12 20:37:01.851471 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-07-12 20:37:01.851478 | orchestrator | 2025-07-12 20:37:01.851485 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-07-12 20:37:01.851492 | orchestrator | Saturday 12 July 2025 20:34:56 +0000 (0:00:04.953) 0:01:38.791 ********* 2025-07-12 20:37:01.851500 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:37:01.851512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:37:01.851531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:37:01.851539 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.851554 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.851581 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.851595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.851603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.851611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.851618 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.851638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.851652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.851664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851686 | orchestrator | 2025-07-12 20:37:01.851694 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-07-12 20:37:01.851701 | orchestrator | Saturday 12 July 2025 20:35:00 +0000 (0:00:03.831) 0:01:42.624 ********* 2025-07-12 20:37:01.851708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:37:01.851721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:37:01.851737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:37:01.851745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.851761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.851768 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.851780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:37:01 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:37:01 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:37:01.851896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.851953 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.851961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12
20:37:01.851968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-07-12 20:37:01.851983 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.851990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.852003 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.852014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.852022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.852029 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:37:01.852036 | orchestrator | 2025-07-12 20:37:01.852044 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-07-12 20:37:01.852069 | orchestrator | Saturday 12 July 2025 20:35:04 +0000 (0:00:04.129) 0:01:46.753 ********* 2025-07-12 20:37:01.852077 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:37:01.852084 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:37:01.852091 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:37:01.852101 | orchestrator | 2025-07-12 20:37:01.852113 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2025-07-12 20:37:01.852125 | orchestrator | Saturday 12 July 2025 20:35:06 +0000 (0:00:01.535) 0:01:48.289 ********* 2025-07-12 20:37:01.852139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:37:01.852158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 20:37:01.852175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.852184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.852192 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.852199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.852214 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:37:01.852222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-07-12 20:37:01.852230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-07-12 20:37:01.852245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.852253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 
'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.852261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.852274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:37:01.852300 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:37:01.852308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 20:37:01.852316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 20:37:01.852328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.852340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.852348 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.852364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.852371 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:01.852379 | orchestrator |
2025-07-12 20:37:01.852386 | orchestrator | TASK [designate : Check designate containers] **********************************
2025-07-12 20:37:01.852393 | orchestrator | Saturday 12 July 2025 20:35:08 +0000 (0:00:02.050) 0:01:50.339 *********
2025-07-12 20:37:01.852401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 20:37:01.852412 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 20:37:01.852424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-07-12 20:37:01.852432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 20:37:01.852445 | orchestrator | changed: [testbed-node-2] => (item={'key':
'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 20:37:01.852452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-07-12 20:37:01.852460 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.852472 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.852483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.852493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.852507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.852516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.852524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.852533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.852546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.852559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.852574 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image':
'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.852583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:37:01.852592 | orchestrator |
2025-07-12 20:37:01.852600 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-07-12 20:37:01.852608 | orchestrator | Saturday 12 July 2025 20:35:14 +0000 (0:00:05.805) 0:01:56.145 *********
2025-07-12 20:37:01.852617 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:01.852625 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:01.852634 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:01.852642 | orchestrator |
2025-07-12 20:37:01.852650 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-07-12 20:37:01.852657 | orchestrator | Saturday 12 July 2025 20:35:14 +0000 (0:00:00.863) 0:01:57.008 *********
2025-07-12 20:37:01.852665 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-07-12 20:37:01.852672 | orchestrator |
2025-07-12 20:37:01.852679 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-07-12 20:37:01.852686 | orchestrator | Saturday 12 July 2025 20:35:18 +0000 (0:00:03.859) 0:02:00.867 *********
2025-07-12 20:37:01.852693 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 20:37:01.852700 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-07-12 20:37:01.852708 | orchestrator |
2025-07-12 20:37:01.852715 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-07-12 20:37:01.852722 | orchestrator | Saturday 12 July 2025 20:35:20 +0000 (0:00:02.102) 0:02:02.970 *********
2025-07-12 20:37:01.852729 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:37:01.852736 | orchestrator |
2025-07-12 20:37:01.852743 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-07-12 20:37:01.852750 | orchestrator | Saturday 12 July 2025 20:35:37 +0000 (0:00:16.272) 0:02:19.242 *********
2025-07-12 20:37:01.852757 | orchestrator |
2025-07-12 20:37:01.852764 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-07-12 20:37:01.852772 | orchestrator | Saturday 12 July 2025 20:35:37 +0000 (0:00:00.237) 0:02:19.480 *********
2025-07-12 20:37:01.852779 | orchestrator |
2025-07-12 20:37:01.852786 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-07-12 20:37:01.852793 | orchestrator | Saturday 12 July 2025 20:35:37 +0000 (0:00:00.168) 0:02:19.648 *********
2025-07-12 20:37:01.852800 | orchestrator |
2025-07-12 20:37:01.852808 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-07-12 20:37:01.852815 | orchestrator | Saturday 12 July 2025 20:35:37 +0000 (0:00:00.347) 0:02:19.996 *********
2025-07-12 20:37:01.852822 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:37:01.852829 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:37:01.852836 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:37:01.852848 | orchestrator |
2025-07-12 20:37:01.852855 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-07-12 20:37:01.852862 | orchestrator | Saturday 12 July 2025 20:35:55 +0000 (0:00:17.682) 0:02:37.679 *********
2025-07-12 20:37:01.852873 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:37:01.852881 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:37:01.852888 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:37:01.852895 | orchestrator |
2025-07-12 20:37:01.852950 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-07-12 20:37:01.852959 | orchestrator | Saturday 12 July 2025 20:36:10 +0000 (0:00:14.559) 0:02:52.238 *********
2025-07-12 20:37:01.852966 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:37:01.852973 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:37:01.852980 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:37:01.852987 | orchestrator |
2025-07-12 20:37:01.852994 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-07-12 20:37:01.853005 | orchestrator | Saturday 12 July 2025 20:36:19 +0000 (0:00:08.969) 0:03:01.207 *********
2025-07-12 20:37:01.853012 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:37:01.853020 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:37:01.853027 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:37:01.853034 | orchestrator |
2025-07-12 20:37:01.853041 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-07-12 20:37:01.853048 | orchestrator | Saturday 12 July 2025 20:36:30 +0000 (0:00:10.996) 0:03:12.204 *********
2025-07-12 20:37:01.853055 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:37:01.853062 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:37:01.853069 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:37:01.853076 | orchestrator |
2025-07-12 20:37:01.853083 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-07-12 20:37:01.853090 | orchestrator | Saturday 12 July 2025 20:36:40 +0000 (0:00:10.440) 0:03:22.645 *********
2025-07-12 20:37:01.853097 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:37:01.853103 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:37:01.853110 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:37:01.853116 | orchestrator |
2025-07-12 20:37:01.853124 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-07-12 20:37:01.853136 | orchestrator | Saturday 12 July 2025 20:36:50 +0000 (0:00:10.156) 0:03:32.802 *********
2025-07-12 20:37:01.853148 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:37:01.853159 | orchestrator |
2025-07-12 20:37:01.853171 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:37:01.853184 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-12 20:37:01.853195 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 20:37:01.853206 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-07-12 20:37:01.853213 | orchestrator |
2025-07-12 20:37:01.853220 | orchestrator |
2025-07-12 20:37:01.853226 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:37:01.853233 | orchestrator | Saturday 12 July 2025 20:36:58 +0000 (0:00:08.080) 0:03:40.882 *********
2025-07-12 20:37:01.853239 | orchestrator |
===============================================================================
2025-07-12 20:37:01.853246 | orchestrator | designate : Copying over designate.conf -------------------------------- 32.44s
2025-07-12 20:37:01.853252 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 17.68s
2025-07-12 20:37:01.853259 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.27s
2025-07-12 20:37:01.853265 | orchestrator | designate : Restart designate-api container ---------------------------- 14.56s
2025-07-12 20:37:01.853278 | orchestrator | designate : Restart designate-producer container ----------------------- 11.00s
2025-07-12 20:37:01.853285 | orchestrator | designate : Restart designate-mdns container --------------------------- 10.44s
2025-07-12 20:37:01.853291 | orchestrator | designate : Restart designate-worker container ------------------------- 10.16s
2025-07-12 20:37:01.853298 | orchestrator | designate : Restart designate-central container ------------------------- 8.97s
2025-07-12 20:37:01.853304 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 8.60s
2025-07-12 20:37:01.853311 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.08s
2025-07-12 20:37:01.853317 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 7.63s
2025-07-12 20:37:01.853324 | orchestrator | designate : Copying over config.json files for services ----------------- 7.61s
2025-07-12 20:37:01.853330 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.79s
2025-07-12 20:37:01.853337 | orchestrator | designate : Check designate containers ---------------------------------- 5.81s
2025-07-12 20:37:01.853343 | orchestrator | designate : Copying over named.conf ------------------------------------- 4.96s
2025-07-12 20:37:01.853350 | orchestrator | designate : Copying over rndc.key --------------------------------------- 4.13s
2025-07-12 20:37:01.853356 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.05s
2025-07-12 20:37:01.853362 | orchestrator | designate : Creating Designate databases -------------------------------- 3.86s
2025-07-12 20:37:01.853369 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.83s
2025-07-12 20:37:01.853376 | orchestrator | designate : Ensuring config directories exist --------------------------- 3.59s
2025-07-12 20:37:04.896884 | orchestrator | 2025-07-12 20:37:04 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED
2025-07-12 20:37:04.899195 | orchestrator | 2025-07-12 20:37:04 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:37:04.900573 | orchestrator | 2025-07-12 20:37:04 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:37:04.902290 | orchestrator | 2025-07-12 20:37:04 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED
2025-07-12 20:37:04.902332 | orchestrator | 2025-07-12 20:37:04 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:37:07.945691 | orchestrator | 2025-07-12 20:37:07 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED
2025-07-12 20:37:07.948478 | orchestrator | 2025-07-12 20:37:07 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:37:07.949636 | orchestrator | 2025-07-12 20:37:07 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:37:07.951418 | orchestrator | 2025-07-12 20:37:07 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED
2025-07-12 20:37:07.951449 | orchestrator | 2025-07-12 20:37:07 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:37:11.005624 | orchestrator | 2025-07-12 20:37:11 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED
2025-07-12 20:37:11.007276 | orchestrator | 2025-07-12 20:37:11 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:37:11.009020 | orchestrator | 2025-07-12 20:37:11 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:37:11.010985 | orchestrator | 2025-07-12 20:37:11 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED
2025-07-12 20:37:11.011037 | orchestrator | 2025-07-12 20:37:11 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:37:14.061616 | orchestrator | 2025-07-12 20:37:14 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state STARTED
2025-07-12 20:37:14.062426 | orchestrator | 2025-07-12 20:37:14 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:37:14.065519 | orchestrator | 2025-07-12 20:37:14 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:37:14.067000 | orchestrator | 2025-07-12 20:37:14 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED
2025-07-12 20:37:14.067102 | orchestrator | 2025-07-12 20:37:14 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:37:17.100578 | orchestrator | 2025-07-12 20:37:17 | INFO  | Task d03a99ec-fbd5-4537-b0f6-2e5a682cf673 is in state SUCCESS
2025-07-12 20:37:17.102574 | orchestrator |
2025-07-12 20:37:17.102663 | orchestrator |
2025-07-12 20:37:17.102678 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:37:17.102691 | orchestrator |
2025-07-12 20:37:17.102702 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:37:17.102713 | orchestrator | Saturday 12 July 2025 20:36:08 +0000 (0:00:00.354) 0:00:00.354 *********
2025-07-12 20:37:17.102769 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:37:17.102784 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:37:17.102795 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:37:17.102806 | orchestrator |
2025-07-12 20:37:17.102817 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:37:17.102828 | orchestrator | Saturday 12 July 2025 20:36:08 +0000 (0:00:00.320) 0:00:00.675 *********
2025-07-12 20:37:17.102839 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-07-12 20:37:17.102851 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-07-12 20:37:17.102862 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-07-12 20:37:17.102872 | orchestrator |
2025-07-12 20:37:17.102883 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-07-12 20:37:17.102894 | orchestrator |
2025-07-12 20:37:17.102905 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-07-12 20:37:17.102966 | orchestrator | Saturday 12 July 2025 20:36:09 +0000 (0:00:00.465) 0:00:01.141 *********
2025-07-12 20:37:17.102979 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:37:17.102990 | orchestrator |
2025-07-12 20:37:17.103002 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-07-12 20:37:17.103013 | orchestrator | Saturday 12 July 2025 20:36:09 +0000 (0:00:00.600) 0:00:01.741 *********
2025-07-12 20:37:17.103024 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-07-12 20:37:17.103035 | orchestrator |
2025-07-12 20:37:17.103045 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-07-12 20:37:17.103068 | orchestrator | Saturday 12 July 2025 20:36:13 +0000 (0:00:03.631) 0:00:05.373 *********
2025-07-12 20:37:17.103079 | orchestrator | changed: [testbed-node-0] => (item=placement ->
https://api-int.testbed.osism.xyz:8780 -> internal)
2025-07-12 20:37:17.103091 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-07-12 20:37:17.103104 | orchestrator |
2025-07-12 20:37:17.103116 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-07-12 20:37:17.103129 | orchestrator | Saturday 12 July 2025 20:36:19 +0000 (0:00:06.415) 0:00:11.788 *********
2025-07-12 20:37:17.103143 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-12 20:37:17.103155 | orchestrator |
2025-07-12 20:37:17.103168 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-07-12 20:37:17.103181 | orchestrator | Saturday 12 July 2025 20:36:22 +0000 (0:00:03.134) 0:00:14.922 *********
2025-07-12 20:37:17.103193 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 20:37:17.103204 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-07-12 20:37:17.103215 | orchestrator |
2025-07-12 20:37:17.103225 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-07-12 20:37:17.103264 | orchestrator | Saturday 12 July 2025 20:36:26 +0000 (0:00:03.904) 0:00:18.827 *********
2025-07-12 20:37:17.103276 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 20:37:17.103287 | orchestrator |
2025-07-12 20:37:17.103312 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2025-07-12 20:37:17.103323 | orchestrator | Saturday 12 July 2025 20:36:30 +0000 (0:00:03.269) 0:00:22.097 *********
2025-07-12 20:37:17.103334 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-07-12 20:37:17.103345 | orchestrator |
2025-07-12 20:37:17.103355 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-07-12 20:37:17.103366 | orchestrator | Saturday 12 July 2025 20:36:34 +0000 (0:00:04.111) 0:00:26.208 *********
2025-07-12 20:37:17.103377 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:17.103387 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:17.103398 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:17.103409 | orchestrator |
2025-07-12 20:37:17.103420 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-07-12 20:37:17.103431 | orchestrator | Saturday 12 July 2025 20:36:34 +0000 (0:00:00.465) 0:00:26.674 *********
2025-07-12 20:37:17.103446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-07-12 20:37:17.103483 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-07-12 20:37:17.103496 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-07-12 20:37:17.103515 | orchestrator |
2025-07-12 20:37:17.103526 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2025-07-12 20:37:17.103537 | orchestrator | Saturday 12 July 2025 20:36:35 +0000 (0:00:00.964) 0:00:27.638 *********
2025-07-12 20:37:17.103549 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:17.103559 | orchestrator |
2025-07-12 20:37:17.103570 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2025-07-12 20:37:17.103581 | orchestrator | Saturday 12 July 2025 20:36:35 +0000 (0:00:00.142) 0:00:27.781 *********
2025-07-12 20:37:17.103592 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:17.103602 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:17.103654 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:17.103666 | orchestrator |
2025-07-12 20:37:17.103677 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-07-12 20:37:17.103688 | orchestrator | Saturday 12 July 2025 20:36:36 +0000 (0:00:00.493) 0:00:28.275 *********
2025-07-12 20:37:17.103705 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:37:17.103717 | orchestrator |
2025-07-12 20:37:17.103728 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2025-07-12 20:37:17.103739 | orchestrator | Saturday 12 July 2025 20:36:36 +0000 (0:00:00.542) 0:00:28.818 *********
2025-07-12 20:37:17.103752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2025-07-12 20:37:17.103775 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:37:17.103788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:37:17.103808 | orchestrator | 2025-07-12 20:37:17.103820 | orchestrator | TASK 
[service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-07-12 20:37:17.103831 | orchestrator | Saturday 12 July 2025 20:36:38 +0000 (0:00:01.456) 0:00:30.274 ********* 2025-07-12 20:37:17.103843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:37:17.103855 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:37:17.103871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:37:17.103884 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:37:17.103934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:37:17.103948 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:37:17.103959 | orchestrator | 2025-07-12 20:37:17.103970 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-07-12 20:37:17.103982 | orchestrator | Saturday 12 July 2025 20:36:39 +0000 (0:00:00.715) 0:00:30.990 ********* 2025-07-12 20:37:17.103993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:37:17.104015 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:37:17.104027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:37:17.104038 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:37:17.104054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:37:17.104066 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:37:17.104077 | orchestrator | 2025-07-12 20:37:17.104089 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-07-12 20:37:17.104105 | orchestrator | Saturday 12 July 2025 20:36:39 +0000 (0:00:00.669) 0:00:31.660 ********* 2025-07-12 20:37:17.104136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:37:17.104161 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:37:17.104206 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:37:17.104225 | orchestrator | 2025-07-12 20:37:17.104243 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 
2025-07-12 20:37:17.104259 | orchestrator | Saturday 12 July 2025 20:36:41 +0000 (0:00:01.328) 0:00:32.989 ********* 2025-07-12 20:37:17.104285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:37:17.104305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}}}}) 2025-07-12 20:37:17.104339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:37:17.104370 | orchestrator | 2025-07-12 20:37:17.104388 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-07-12 20:37:17.104408 | orchestrator | Saturday 12 July 2025 20:36:44 +0000 (0:00:03.353) 0:00:36.342 ********* 2025-07-12 20:37:17.104420 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-12 20:37:17.104431 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-12 20:37:17.104442 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-07-12 20:37:17.104453 | orchestrator | 2025-07-12 20:37:17.104464 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-07-12 20:37:17.104475 | orchestrator | Saturday 12 July 2025 20:36:46 +0000 (0:00:01.899) 0:00:38.242 ********* 2025-07-12 20:37:17.104485 
| orchestrator | changed: [testbed-node-0] 2025-07-12 20:37:17.104496 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:37:17.104507 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:37:17.104518 | orchestrator | 2025-07-12 20:37:17.104529 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-07-12 20:37:17.104541 | orchestrator | Saturday 12 July 2025 20:36:47 +0000 (0:00:01.529) 0:00:39.772 ********* 2025-07-12 20:37:17.104557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:37:17.104569 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:37:17.104581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:37:17.104592 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:37:17.104621 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-07-12 20:37:17.104664 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:37:17.104683 | orchestrator | 2025-07-12 20:37:17.104701 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-07-12 20:37:17.104719 | orchestrator | Saturday 12 July 2025 20:36:48 +0000 (0:00:00.561) 0:00:40.333 ********* 2025-07-12 20:37:17.104735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:37:17.104761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:37:17.104781 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:12.0.1.20250711', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-07-12 20:37:17.104800 | orchestrator | 2025-07-12 20:37:17.104836 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-07-12 20:37:17.104850 | orchestrator | Saturday 12 July 2025 20:36:50 +0000 (0:00:01.982) 0:00:42.316 ********* 2025-07-12 20:37:17.104861 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:37:17.104872 | orchestrator | 2025-07-12 20:37:17.104883 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-07-12 20:37:17.104894 | orchestrator | Saturday 12 July 2025 20:36:52 +0000 (0:00:02.522) 0:00:44.839 ********* 2025-07-12 20:37:17.104904 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:37:17.104944 | orchestrator | 2025-07-12 20:37:17.104966 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-07-12 20:37:17.104983 | orchestrator | Saturday 12 July 2025 20:36:55 +0000 (0:00:02.474) 0:00:47.313 ********* 2025-07-12 20:37:17.105003 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:37:17.105015 | orchestrator | 2025-07-12 20:37:17.105026 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-07-12 20:37:17.105052 | orchestrator | Saturday 12 July 
2025 20:37:07 +0000 (0:00:12.157) 0:00:59.471 ********* 2025-07-12 20:37:17.105073 | orchestrator | 2025-07-12 20:37:17.105085 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-07-12 20:37:17.105096 | orchestrator | Saturday 12 July 2025 20:37:07 +0000 (0:00:00.067) 0:00:59.538 ********* 2025-07-12 20:37:17.105107 | orchestrator | 2025-07-12 20:37:17.105117 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-07-12 20:37:17.105128 | orchestrator | Saturday 12 July 2025 20:37:07 +0000 (0:00:00.064) 0:00:59.603 ********* 2025-07-12 20:37:17.105139 | orchestrator | 2025-07-12 20:37:17.105150 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-07-12 20:37:17.105161 | orchestrator | Saturday 12 July 2025 20:37:07 +0000 (0:00:00.066) 0:00:59.669 ********* 2025-07-12 20:37:17.105172 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:37:17.105183 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:37:17.105193 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:37:17.105204 | orchestrator | 2025-07-12 20:37:17.105215 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:37:17.105228 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 20:37:17.105240 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 20:37:17.105251 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 20:37:17.105327 | orchestrator | 2025-07-12 20:37:17.105340 | orchestrator | 2025-07-12 20:37:17.105351 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:37:17.105364 | orchestrator | Saturday 12 July 2025 20:37:15 +0000 
(0:00:07.907) 0:01:07.576 ********* 2025-07-12 20:37:17.105383 | orchestrator | =============================================================================== 2025-07-12 20:37:17.105399 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.16s 2025-07-12 20:37:17.105409 | orchestrator | placement : Restart placement-api container ----------------------------- 7.91s 2025-07-12 20:37:17.105421 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.42s 2025-07-12 20:37:17.105431 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.11s 2025-07-12 20:37:17.105442 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.90s 2025-07-12 20:37:17.105452 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.63s 2025-07-12 20:37:17.105463 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.35s 2025-07-12 20:37:17.105474 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.27s 2025-07-12 20:37:17.105495 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.14s 2025-07-12 20:37:17.105505 | orchestrator | placement : Creating placement databases -------------------------------- 2.52s 2025-07-12 20:37:17.105523 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.47s 2025-07-12 20:37:17.105534 | orchestrator | placement : Check placement containers ---------------------------------- 1.98s 2025-07-12 20:37:17.105545 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.90s 2025-07-12 20:37:17.105556 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.53s 2025-07-12 20:37:17.105566 | orchestrator | service-cert-copy : placement | Copying over extra CA 
certificates ------ 1.46s 2025-07-12 20:37:17.105577 | orchestrator | placement : Copying over config.json files for services ----------------- 1.33s 2025-07-12 20:37:17.105587 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.96s 2025-07-12 20:37:17.105598 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.72s 2025-07-12 20:37:17.105609 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.67s 2025-07-12 20:37:17.105620 | orchestrator | placement : include_tasks ----------------------------------------------- 0.60s 2025-07-12 20:37:17.105632 | orchestrator | 2025-07-12 20:37:17 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:37:17.105784 | orchestrator | 2025-07-12 20:37:17 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:37:17.105802 | orchestrator | 2025-07-12 20:37:17 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:37:17.107035 | orchestrator | 2025-07-12 20:37:17 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:37:17.107071 | orchestrator | 2025-07-12 20:37:17 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:37:20.142752 | orchestrator | 2025-07-12 20:37:20 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:37:20.144303 | orchestrator | 2025-07-12 20:37:20 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED 2025-07-12 20:37:20.144350 | orchestrator | 2025-07-12 20:37:20 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:37:20.145307 | orchestrator | 2025-07-12 20:37:20 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:37:20.145341 | orchestrator | 2025-07-12 20:37:20 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:37:23.181573 | 
orchestrator | 2025-07-12 20:37:23 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:37:23.181829 | orchestrator | 2025-07-12 20:37:23 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:37:23.182594 | orchestrator | 2025-07-12 20:37:23 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED
2025-07-12 20:37:23.183860 | orchestrator | 2025-07-12 20:37:23 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED
2025-07-12 20:37:23.183883 | orchestrator | 2025-07-12 20:37:23 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:37:26.230337 | orchestrator | 2025-07-12 20:37:26 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:37:26.230620 | orchestrator | 2025-07-12 20:37:26 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:37:26.231391 | orchestrator | 2025-07-12 20:37:26 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED
2025-07-12 20:37:26.232395 | orchestrator | 2025-07-12 20:37:26 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED
2025-07-12 20:37:26.232440 | orchestrator | 2025-07-12 20:37:26 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:37:29.275672 | orchestrator | 2025-07-12 20:37:29 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:37:29.278163 | orchestrator | 2025-07-12 20:37:29 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:37:29.280481 | orchestrator | 2025-07-12 20:37:29 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED
2025-07-12 20:37:29.288429 | orchestrator | 2025-07-12 20:37:29 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED
2025-07-12 20:37:29.288512 | orchestrator | 2025-07-12 20:37:29 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:37:32.334653 | orchestrator | 2025-07-12 20:37:32 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:37:32.338750 | orchestrator | 2025-07-12 20:37:32 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:37:32.341017 | orchestrator | 2025-07-12 20:37:32 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED
2025-07-12 20:37:32.342244 | orchestrator | 2025-07-12 20:37:32 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED
2025-07-12 20:37:32.342289 | orchestrator | 2025-07-12 20:37:32 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:37:35.394863 | orchestrator | 2025-07-12 20:37:35 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:37:35.398419 | orchestrator | 2025-07-12 20:37:35 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:37:35.398611 | orchestrator | 2025-07-12 20:37:35 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED
2025-07-12 20:37:35.401642 | orchestrator | 2025-07-12 20:37:35 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED
2025-07-12 20:37:35.401851 | orchestrator | 2025-07-12 20:37:35 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:37:38.455561 | orchestrator | 2025-07-12 20:37:38 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:37:38.458524 | orchestrator | 2025-07-12 20:37:38 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:37:38.464434 | orchestrator | 2025-07-12 20:37:38 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED
2025-07-12 20:37:38.466403 | orchestrator | 2025-07-12 20:37:38 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED
2025-07-12 20:37:38.466430 | orchestrator | 2025-07-12 20:37:38 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:37:41.509740 | orchestrator | 2025-07-12 20:37:41 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:37:41.509849 | orchestrator | 2025-07-12 20:37:41 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:37:41.514474 | orchestrator | 2025-07-12 20:37:41 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED
2025-07-12 20:37:41.514842 | orchestrator | 2025-07-12 20:37:41 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED
2025-07-12 20:37:41.514900 | orchestrator | 2025-07-12 20:37:41 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:37:44.563042 | orchestrator | 2025-07-12 20:37:44 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:37:44.565670 | orchestrator | 2025-07-12 20:37:44 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:37:44.567854 | orchestrator | 2025-07-12 20:37:44 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED
2025-07-12 20:37:44.570369 | orchestrator | 2025-07-12 20:37:44 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED
2025-07-12 20:37:44.570410 | orchestrator | 2025-07-12 20:37:44 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:37:47.617582 | orchestrator | 2025-07-12 20:37:47 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:37:47.617812 | orchestrator | 2025-07-12 20:37:47 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state STARTED
2025-07-12 20:37:47.618779 | orchestrator | 2025-07-12 20:37:47 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED
2025-07-12 20:37:47.621038 | orchestrator | 2025-07-12 20:37:47 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED
2025-07-12 20:37:47.621088 | orchestrator | 2025-07-12 20:37:47 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:37:50.665467 | orchestrator | 2025-07-12 20:37:50 | INFO  | Task
cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED
2025-07-12 20:37:50.670215 | orchestrator | 2025-07-12 20:37:50 | INFO  | Task 9053c53c-cf21-44f3-af8e-7dbf1cf36098 is in state SUCCESS
2025-07-12 20:37:50.672676 | orchestrator |
2025-07-12 20:37:50.672750 | orchestrator |
2025-07-12 20:37:50.672759 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:37:50.672766 | orchestrator |
2025-07-12 20:37:50.672773 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:37:50.672779 | orchestrator | Saturday 12 July 2025 20:32:29 +0000 (0:00:00.269) 0:00:00.269 *********
2025-07-12 20:37:50.672786 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:37:50.672793 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:37:50.672799 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:37:50.672804 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:37:50.672810 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:37:50.672816 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:37:50.672822 | orchestrator |
2025-07-12 20:37:50.672827 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:37:50.672833 | orchestrator | Saturday 12 July 2025 20:32:30 +0000 (0:00:00.771) 0:00:01.041 *********
2025-07-12 20:37:50.672839 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True)
2025-07-12 20:37:50.672845 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True)
2025-07-12 20:37:50.672851 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True)
2025-07-12 20:37:50.672856 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True)
2025-07-12 20:37:50.672875 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True)
2025-07-12 20:37:50.672882 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True)
2025-07-12 20:37:50.672888 | orchestrator |
2025-07-12 20:37:50.672895 | orchestrator | PLAY [Apply role neutron] ******************************************************
2025-07-12 20:37:50.672901 | orchestrator |
2025-07-12 20:37:50.672907 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-07-12 20:37:50.672914 | orchestrator | Saturday 12 July 2025 20:32:31 +0000 (0:00:00.666) 0:00:01.708 *********
2025-07-12 20:37:50.672921 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:37:50.672929 | orchestrator |
2025-07-12 20:37:50.672935 | orchestrator | TASK [neutron : Get container facts] *******************************************
2025-07-12 20:37:50.672941 | orchestrator | Saturday 12 July 2025 20:32:32 +0000 (0:00:01.282) 0:00:02.990 *********
2025-07-12 20:37:50.672947 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:37:50.673087 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:37:50.673096 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:37:50.673101 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:37:50.673107 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:37:50.673112 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:37:50.673137 | orchestrator |
2025-07-12 20:37:50.673144 | orchestrator | TASK [neutron : Get container volume facts] ************************************
2025-07-12 20:37:50.673150 | orchestrator | Saturday 12 July 2025 20:32:33 +0000 (0:00:01.293) 0:00:04.283 *********
2025-07-12 20:37:50.673156 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:37:50.673209 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:37:50.673216 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:37:50.673222 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:37:50.673228 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:37:50.673235 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:37:50.673242 | orchestrator |
2025-07-12 20:37:50.673249 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************
2025-07-12 20:37:50.673256 | orchestrator | Saturday 12 July 2025 20:32:35 +0000 (0:00:01.104) 0:00:05.387 *********
2025-07-12 20:37:50.673263 | orchestrator | ok: [testbed-node-0] => {
2025-07-12 20:37:50.673270 | orchestrator |  "changed": false,
2025-07-12 20:37:50.673277 | orchestrator |  "msg": "All assertions passed"
2025-07-12 20:37:50.673283 | orchestrator | }
2025-07-12 20:37:50.673290 | orchestrator | ok: [testbed-node-1] => {
2025-07-12 20:37:50.673297 | orchestrator |  "changed": false,
2025-07-12 20:37:50.673303 | orchestrator |  "msg": "All assertions passed"
2025-07-12 20:37:50.673309 | orchestrator | }
2025-07-12 20:37:50.673316 | orchestrator | ok: [testbed-node-2] => {
2025-07-12 20:37:50.673322 | orchestrator |  "changed": false,
2025-07-12 20:37:50.673329 | orchestrator |  "msg": "All assertions passed"
2025-07-12 20:37:50.673336 | orchestrator | }
2025-07-12 20:37:50.673342 | orchestrator | ok: [testbed-node-3] => {
2025-07-12 20:37:50.673348 | orchestrator |  "changed": false,
2025-07-12 20:37:50.673355 | orchestrator |  "msg": "All assertions passed"
2025-07-12 20:37:50.673361 | orchestrator | }
2025-07-12 20:37:50.673368 | orchestrator | ok: [testbed-node-4] => {
2025-07-12 20:37:50.673374 | orchestrator |  "changed": false,
2025-07-12 20:37:50.673381 | orchestrator |  "msg": "All assertions passed"
2025-07-12 20:37:50.673387 | orchestrator | }
2025-07-12 20:37:50.673393 | orchestrator | ok: [testbed-node-5] => {
2025-07-12 20:37:50.673400 | orchestrator |  "changed": false,
2025-07-12 20:37:50.673407 | orchestrator |  "msg": "All assertions passed"
2025-07-12 20:37:50.673414 | orchestrator | }
2025-07-12 20:37:50.673420 | orchestrator |
2025-07-12 20:37:50.673427 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************
2025-07-12 20:37:50.673434 | orchestrator | Saturday 12 July 2025 20:32:35 +0000 (0:00:00.878) 0:00:06.266 *********
2025-07-12 20:37:50.673441 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.673448 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.673455 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.673461 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.673468 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.673474 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.673481 | orchestrator |
2025-07-12 20:37:50.673487 | orchestrator | TASK [service-ks-register : neutron | Creating services] ***********************
2025-07-12 20:37:50.673494 | orchestrator | Saturday 12 July 2025 20:32:36 +0000 (0:00:00.810) 0:00:07.077 *********
2025-07-12 20:37:50.673501 | orchestrator | changed: [testbed-node-0] => (item=neutron (network))
2025-07-12 20:37:50.673516 | orchestrator |
2025-07-12 20:37:50.673523 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] **********************
2025-07-12 20:37:50.673530 | orchestrator | Saturday 12 July 2025 20:32:40 +0000 (0:00:03.286) 0:00:10.363 *********
2025-07-12 20:37:50.673537 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal)
2025-07-12 20:37:50.673544 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public)
2025-07-12 20:37:50.673551 | orchestrator |
2025-07-12 20:37:50.673571 | orchestrator | TASK [service-ks-register : neutron | Creating projects] ***********************
2025-07-12 20:37:50.673578 | orchestrator | Saturday 12 July 2025 20:32:45 +0000 (0:00:05.560) 0:00:15.924 *********
2025-07-12 20:37:50.673592 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-12 20:37:50.673599 | orchestrator |
2025-07-12 20:37:50.673605 | orchestrator | TASK [service-ks-register : neutron | Creating users] **************************
2025-07-12 20:37:50.673612 | orchestrator | Saturday 12 July 2025 20:32:48 +0000 (0:00:02.875) 0:00:18.799 *********
2025-07-12 20:37:50.673619 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 20:37:50.673625 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service)
2025-07-12 20:37:50.673632 | orchestrator |
2025-07-12 20:37:50.673638 | orchestrator | TASK [service-ks-register : neutron | Creating roles] **************************
2025-07-12 20:37:50.673643 | orchestrator | Saturday 12 July 2025 20:32:52 +0000 (0:00:03.994) 0:00:22.793 *********
2025-07-12 20:37:50.673649 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 20:37:50.673655 | orchestrator |
2025-07-12 20:37:50.673662 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] *********************
2025-07-12 20:37:50.673668 | orchestrator | Saturday 12 July 2025 20:32:56 +0000 (0:00:03.528) 0:00:26.322 *********
2025-07-12 20:37:50.673678 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin)
2025-07-12 20:37:50.673684 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service)
2025-07-12 20:37:50.673690 | orchestrator |
2025-07-12 20:37:50.673696 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-07-12 20:37:50.673702 | orchestrator | Saturday 12 July 2025 20:33:03 +0000 (0:00:07.536) 0:00:33.858 *********
2025-07-12 20:37:50.673707 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.673713 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.673719 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.673725 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.673730 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.673736 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.673742 | orchestrator |
2025-07-12 20:37:50.673748 | orchestrator | TASK [Load and persist kernel modules] *****************************************
2025-07-12 20:37:50.673754 | orchestrator | Saturday 12 July 2025 20:33:04 +0000 (0:00:01.190) 0:00:35.049 *********
2025-07-12 20:37:50.673760 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.673765 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.673771 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.673777 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.673782 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.673788 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.673794 | orchestrator |
2025-07-12 20:37:50.673800 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2025-07-12 20:37:50.673805 | orchestrator | Saturday 12 July 2025 20:33:07 +0000 (0:00:02.887) 0:00:37.936 *********
2025-07-12 20:37:50.673812 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:37:50.673817 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:37:50.673823 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:37:50.673829 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:37:50.673834 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:37:50.673840 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:37:50.673846 | orchestrator |
2025-07-12 20:37:50.673852 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-07-12 20:37:50.673858 | orchestrator | Saturday 12 July 2025 20:33:09 +0000 (0:00:01.601) 0:00:39.537 *********
2025-07-12 20:37:50.673864 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.673870 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.673876 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.673881 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.673887 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.673893 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.673899
| orchestrator |
2025-07-12 20:37:50.673905 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2025-07-12 20:37:50.673911 | orchestrator | Saturday 12 July 2025 20:33:12 +0000 (0:00:03.699) 0:00:43.237 *********
2025-07-12 20:37:50.673925 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.673941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.673987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.674003 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:37:50.674062 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:37:50.674081 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:37:50.674087 | orchestrator |
2025-07-12 20:37:50.674093 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2025-07-12 20:37:50.674099 | orchestrator | Saturday 12 July 2025 20:33:17 +0000 (0:00:04.617) 0:00:47.855 *********
2025-07-12 20:37:50.674106 | orchestrator | [WARNING]: Skipped
2025-07-12 20:37:50.674112 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2025-07-12 20:37:50.674118 | orchestrator | due to this access issue:
2025-07-12 20:37:50.674125 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2025-07-12 20:37:50.674131 | orchestrator | a directory
2025-07-12 20:37:50.674137 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 20:37:50.674144 | orchestrator |
2025-07-12 20:37:50.674157 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-07-12 20:37:50.674163 | orchestrator | Saturday 12 July 2025 20:33:18 +0000 (0:00:00.958) 0:00:48.813 *********
2025-07-12 20:37:50.674169 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:37:50.674176 | orchestrator |
2025-07-12 20:37:50.674182 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2025-07-12 20:37:50.674188 | orchestrator | Saturday 12 July 2025 20:33:19 +0000 (0:00:01.309) 0:00:50.123 *********
2025-07-12 20:37:50.674198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.674204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.674217 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:37:50.674224 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.674235 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:37:50.674245 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:37:50.674252 | orchestrator |
2025-07-12 20:37:50.674258 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2025-07-12 20:37:50.674264 | orchestrator | Saturday 12 July 2025 20:33:23 +0000 (0:00:03.836) 0:00:53.959 *********
2025-07-12 20:37:50.674270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.674281 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.674287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.674293 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.674303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.674310 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.674319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:37:50.674326 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.674332 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:37:50.674343 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.674349 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/,
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:37:50.674355 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:37:50.674361 | orchestrator | 2025-07-12 20:37:50.674367 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-07-12 20:37:50.674373 | orchestrator | Saturday 12 July 2025 20:33:28 +0000 (0:00:04.594) 0:00:58.553 ********* 2025-07-12 20:37:50.674379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 20:37:50.674385 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:37:50.674397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 20:37:50.674404 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:37:50.674413 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 20:37:50.674448 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:37:50.674459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:37:50.674468 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:37:50.674477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:37:50.674486 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:37:50.674495 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:37:50.674505 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:37:50.674513 | orchestrator | 2025-07-12 20:37:50.674522 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-07-12 20:37:50.674536 | orchestrator | Saturday 12 July 2025 20:33:32 +0000 (0:00:03.880) 0:01:02.434 ********* 2025-07-12 20:37:50.674545 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:37:50.674554 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:37:50.674562 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:37:50.674571 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:37:50.674579 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:37:50.674587 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:37:50.674596 | orchestrator | 2025-07-12 20:37:50.674605 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-07-12 20:37:50.674613 | orchestrator | Saturday 12 July 2025 20:33:35 +0000 (0:00:03.739) 0:01:06.173 ********* 2025-07-12 20:37:50.674622 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:37:50.674631 | orchestrator | 2025-07-12 20:37:50.674640 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-07-12 20:37:50.674654 | orchestrator | Saturday 12 July 2025 20:33:36 +0000 (0:00:00.154) 0:01:06.327 ********* 2025-07-12 20:37:50.674663 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:37:50.674671 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:37:50.674680 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:37:50.674688 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:37:50.674697 | orchestrator | skipping: [testbed-node-4] 2025-07-12 
20:37:50.674705 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:37:50.674714 | orchestrator | 2025-07-12 20:37:50.674727 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-07-12 20:37:50.674735 | orchestrator | Saturday 12 July 2025 20:33:37 +0000 (0:00:01.036) 0:01:07.364 ********* 2025-07-12 20:37:50.674745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 20:37:50.674754 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:37:50.674764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 20:37:50.674773 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:37:50.674782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-07-12 20:37:50.674791 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:37:50.674806 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:37:50.674824 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:37:50.674838 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:37:50.674847 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:37:50.674857 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:37:50.674866 | orchestrator | skipping: [testbed-node-4] 
2025-07-12 20:37:50.674875 | orchestrator | 2025-07-12 20:37:50.674883 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-07-12 20:37:50.674893 | orchestrator | Saturday 12 July 2025 20:33:39 +0000 (0:00:02.652) 0:01:10.017 ********* 2025-07-12 20:37:50.674902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:37:50.674912 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 
6640'], 'timeout': '30'}}}) 2025-07-12 20:37:50.674933 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 20:37:50.674978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:37:50.674990 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 20:37:50.674999 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:37:50.675008 | orchestrator | 2025-07-12 20:37:50.675017 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-07-12 20:37:50.675026 | orchestrator | Saturday 12 July 2025 20:33:44 +0000 (0:00:05.171) 0:01:15.188 ********* 2025-07-12 20:37:50.675040 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 20:37:50.675060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:37:50.675071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:37:50.675080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:37:50.675089 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 20:37:50.675109 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 20:37:50.675119 | orchestrator | 2025-07-12 20:37:50.675129 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-07-12 20:37:50.675138 | orchestrator | Saturday 12 July 2025 20:33:53 +0000 (0:00:08.798) 0:01:23.987 ********* 2025-07-12 20:37:50.675151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:37:50.675160 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:37:50.675170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:37:50.675179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2025-07-12 20:37:50.675188 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:37:50.675198 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-07-12 20:37:50.675213 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:37:50.675230 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:37:50.675244 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.675253 | orchestrator |
2025-07-12 20:37:50.675262 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-07-12 20:37:50.675271 | orchestrator | Saturday 12 July 2025 20:33:57 +0000 (0:00:04.029) 0:01:28.016 *********
2025-07-12 20:37:50.675279 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.675288 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:37:50.675297 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:37:50.675306 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.675314 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.675323 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:37:50.675331 | orchestrator |
2025-07-12 20:37:50.675376 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-07-12 20:37:50.675386 | orchestrator | Saturday 12 July 2025 20:34:02 +0000 (0:00:04.434) 0:01:32.451 *********
2025-07-12 20:37:50.675395 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:37:50.675411 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.675420 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:37:50.675429 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.675446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:37:50.675456 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.675470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.675479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.675489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.675504 | orchestrator |
2025-07-12 20:37:50.675514 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-07-12 20:37:50.675523 | orchestrator | Saturday 12 July 2025 20:34:07 +0000 (0:00:05.557) 0:01:38.008 *********
2025-07-12 20:37:50.675532 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.675540 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.675548 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.675557 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.675566 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.675574 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.675583 | orchestrator |
2025-07-12 20:37:50.675591 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-07-12 20:37:50.675600 | orchestrator | Saturday 12 July 2025 20:34:11 +0000 (0:00:03.450) 0:01:41.458 *********
2025-07-12 20:37:50.675609 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.675617 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.675625 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.675634 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.675643 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.675651 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.675660 | orchestrator |
2025-07-12 20:37:50.675669 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-07-12 20:37:50.675677 | orchestrator | Saturday 12 July 2025 20:34:16 +0000 (0:00:05.010) 0:01:46.468 *********
2025-07-12 20:37:50.675686 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.675694 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.675703 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.675717 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.675726 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.675734 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.675743 | orchestrator |
2025-07-12 20:37:50.675752 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-07-12 20:37:50.675760 | orchestrator | Saturday 12 July 2025 20:34:20 +0000 (0:00:04.608) 0:01:51.077 *********
2025-07-12 20:37:50.675769 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.675777 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.675786 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.675795 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.675803 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.675811 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.675820 | orchestrator |
2025-07-12 20:37:50.675829 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-07-12 20:37:50.675838 | orchestrator | Saturday 12 July 2025 20:34:26 +0000 (0:00:05.640) 0:01:56.717 *********
2025-07-12 20:37:50.675847 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.675855 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.675864 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.675872 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.675881 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.675889 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.675897 | orchestrator |
2025-07-12 20:37:50.675924 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-07-12 20:37:50.675934 | orchestrator | Saturday 12 July 2025 20:34:31 +0000 (0:00:05.101) 0:02:01.818 *********
2025-07-12 20:37:50.675943 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.676028 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.676040 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.676049 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.676057 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.676066 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.676074 | orchestrator |
2025-07-12 20:37:50.676083 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-07-12 20:37:50.676091 | orchestrator | Saturday 12 July 2025 20:34:36 +0000 (0:00:05.125) 0:02:06.944 *********
2025-07-12 20:37:50.676100 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-07-12 20:37:50.676109 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.676117 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-07-12 20:37:50.676126 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.676134 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-07-12 20:37:50.676143 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.676151 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-07-12 20:37:50.676159 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.676168 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-07-12 20:37:50.676176 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.676185 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-07-12 20:37:50.676194 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.676202 | orchestrator |
2025-07-12 20:37:50.676211 | orchestrator | TASK [neutron : Copying over l3_agent.ini] *************************************
2025-07-12 20:37:50.676219 | orchestrator | Saturday 12 July 2025 20:34:42 +0000 (0:00:05.835) 0:02:12.780 *********
2025-07-12 20:37:50.676229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.676238 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.676254 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:37:50.676263 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.676277 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.676292 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.676301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.676310 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.676319 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:37:50.676328 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.676337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:37:50.676346 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.676355 | orchestrator |
2025-07-12 20:37:50.676364 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2025-07-12 20:37:50.676372 | orchestrator | Saturday 12 July 2025 20:34:47 +0000 (0:00:04.939) 0:02:17.720 *********
2025-07-12 20:37:50.676387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.676404 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.676420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.676429 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.676438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.676447 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.676456 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:37:50.676465 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.676474 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:37:50.676489 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.676504 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:37:50.676513 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.676522 | orchestrator |
2025-07-12 20:37:50.676531 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2025-07-12 20:37:50.676544 | orchestrator | Saturday 12 July 2025 20:34:52 +0000 (0:00:04.657) 0:02:23.274 *********
2025-07-12 20:37:50.676553 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.676561 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.676570 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.676578 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.676586 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.676595 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.676604 | orchestrator |
2025-07-12 20:37:50.676612 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2025-07-12 20:37:50.676621 | orchestrator | Saturday 12 July 2025 20:34:57 +0000 (0:00:04.657) 0:02:27.932 *********
2025-07-12 20:37:50.676629 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.676638 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.676646 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.676655 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:37:50.676663 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:37:50.676672 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:37:50.676680 | orchestrator |
2025-07-12 20:37:50.676688 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************
2025-07-12 20:37:50.676697 | orchestrator | Saturday 12 July 2025 20:35:03 +0000 (0:00:05.911) 0:02:33.844 *********
2025-07-12 20:37:50.676706 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.676714 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.676723 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.676731 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.676740 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.676748 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.676757 | orchestrator |
2025-07-12 20:37:50.676766 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2025-07-12 20:37:50.676774 | orchestrator | Saturday 12 July 2025 20:35:08 +0000 (0:00:05.146) 0:02:38.991 *********
2025-07-12 20:37:50.676782 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.676791 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.676799 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.676808 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.676817 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.676825 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.676834 | orchestrator |
2025-07-12 20:37:50.676842 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2025-07-12 20:37:50.676851 | orchestrator | Saturday 12 July 2025 20:35:12 +0000 (0:00:03.942) 0:02:42.933 *********
2025-07-12 20:37:50.676860 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.676876 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.676884 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.676893 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.676901 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.676910 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.676918 | orchestrator |
2025-07-12 20:37:50.676927 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2025-07-12 20:37:50.676936 | orchestrator | Saturday 12 July 2025 20:35:17 +0000 (0:00:04.427) 0:02:47.361 *********
2025-07-12 20:37:50.676944 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.676977 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.676987 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.676995 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.677004 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.677013 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.677021 | orchestrator |
2025-07-12 20:37:50.677030 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2025-07-12 20:37:50.677038 | orchestrator | Saturday 12 July 2025 20:35:20 +0000 (0:00:03.421) 0:02:50.783 *********
2025-07-12 20:37:50.677047 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.677055 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.677064 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.677072 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.677080 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.677089 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.677097 | orchestrator |
2025-07-12 20:37:50.677105 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2025-07-12 20:37:50.677114 | orchestrator | Saturday 12 July 2025 20:35:23 +0000 (0:00:03.415) 0:02:54.199 *********
2025-07-12 20:37:50.677123 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.677131 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.677139 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.677148 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.677156 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.677164 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.677173 | orchestrator |
2025-07-12 20:37:50.677181 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2025-07-12 20:37:50.677190 | orchestrator | Saturday 12 July 2025 20:35:26 +0000 (0:00:03.074) 0:02:57.274 *********
2025-07-12 20:37:50.677198 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.677213 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.677221 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.677230 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.677238 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.677246 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.677255 | orchestrator |
2025-07-12 20:37:50.677264 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2025-07-12 20:37:50.677272 | orchestrator | Saturday 12 July 2025 20:35:31 +0000 (0:00:04.401) 0:03:01.676 *********
2025-07-12 20:37:50.677281 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.677289 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.677297 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.677306 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.677314 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.677322 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.677331 | orchestrator |
2025-07-12 20:37:50.677339 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2025-07-12 20:37:50.677348 | orchestrator | Saturday 12 July 2025 20:35:36 +0000 (0:00:04.873) 0:03:06.549 *********
2025-07-12 20:37:50.677356 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-07-12 20:37:50.677370 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.677379 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-07-12 20:37:50.677394 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.677402 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-07-12 20:37:50.677411 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.677420 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-07-12 20:37:50.677428 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.677436 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-07-12 20:37:50.677445 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.677453 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-07-12 20:37:50.677462 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.677470 | orchestrator |
2025-07-12 20:37:50.677479 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2025-07-12 20:37:50.677487 | orchestrator | Saturday 12 July 2025 20:35:42 +0000 (0:00:05.773) 0:03:12.323 *********
2025-07-12 20:37:50.677496 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.677505 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:37:50.677514 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:37:50.677523 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:37:50.677539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.677548 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:37:50.677567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-07-12 20:37:50.677576 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:37:50.677585 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:37:50.677594 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:37:50.677603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-07-12 20:37:50.677612 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:37:50.677621 | orchestrator |
2025-07-12 20:37:50.677629 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2025-07-12 20:37:50.677638 | orchestrator | Saturday 12 July 2025 20:35:46 +0000 (0:00:04.477) 0:03:16.801 *********
2025-07-12 20:37:50.677647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server',
'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:37:50.677662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:37:50.677683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-07-12 20:37:50.677693 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 20:37:50.677702 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 20:37:50.677711 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-07-12 20:37:50.677720 | orchestrator | 2025-07-12 20:37:50.677735 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-07-12 20:37:50.677748 | orchestrator | Saturday 12 July 2025 20:35:52 +0000 (0:00:05.782) 0:03:22.583 ********* 2025-07-12 20:37:50.677757 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:37:50.677765 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:37:50.677774 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:37:50.677782 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:37:50.677791 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:37:50.677799 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:37:50.677808 | orchestrator | 2025-07-12 20:37:50.677816 | orchestrator | TASK [neutron : 
Creating Neutron database] ************************************* 2025-07-12 20:37:50.677825 | orchestrator | Saturday 12 July 2025 20:35:53 +0000 (0:00:00.873) 0:03:23.456 ********* 2025-07-12 20:37:50.677833 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:37:50.677842 | orchestrator | 2025-07-12 20:37:50.677850 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-07-12 20:37:50.677859 | orchestrator | Saturday 12 July 2025 20:35:55 +0000 (0:00:02.073) 0:03:25.530 ********* 2025-07-12 20:37:50.677867 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:37:50.677876 | orchestrator | 2025-07-12 20:37:50.677884 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-07-12 20:37:50.677893 | orchestrator | Saturday 12 July 2025 20:35:57 +0000 (0:00:02.391) 0:03:27.921 ********* 2025-07-12 20:37:50.677905 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:37:50.677914 | orchestrator | 2025-07-12 20:37:50.677923 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 20:37:50.677931 | orchestrator | Saturday 12 July 2025 20:36:43 +0000 (0:00:45.921) 0:04:13.843 ********* 2025-07-12 20:37:50.677940 | orchestrator | 2025-07-12 20:37:50.677948 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 20:37:50.677982 | orchestrator | Saturday 12 July 2025 20:36:43 +0000 (0:00:00.289) 0:04:14.132 ********* 2025-07-12 20:37:50.677996 | orchestrator | 2025-07-12 20:37:50.678011 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 20:37:50.678093 | orchestrator | Saturday 12 July 2025 20:36:43 +0000 (0:00:00.083) 0:04:14.215 ********* 2025-07-12 20:37:50.678102 | orchestrator | 2025-07-12 20:37:50.678111 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 
2025-07-12 20:37:50.678120 | orchestrator | Saturday 12 July 2025 20:36:43 +0000 (0:00:00.091) 0:04:14.307 ********* 2025-07-12 20:37:50.678128 | orchestrator | 2025-07-12 20:37:50.678137 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 20:37:50.678145 | orchestrator | Saturday 12 July 2025 20:36:44 +0000 (0:00:00.074) 0:04:14.382 ********* 2025-07-12 20:37:50.678154 | orchestrator | 2025-07-12 20:37:50.678162 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-07-12 20:37:50.678171 | orchestrator | Saturday 12 July 2025 20:36:44 +0000 (0:00:00.105) 0:04:14.487 ********* 2025-07-12 20:37:50.678179 | orchestrator | 2025-07-12 20:37:50.678188 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-07-12 20:37:50.678196 | orchestrator | Saturday 12 July 2025 20:36:44 +0000 (0:00:00.084) 0:04:14.572 ********* 2025-07-12 20:37:50.678205 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:37:50.678213 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:37:50.678222 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:37:50.678230 | orchestrator | 2025-07-12 20:37:50.678239 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-07-12 20:37:50.678247 | orchestrator | Saturday 12 July 2025 20:37:15 +0000 (0:00:31.130) 0:04:45.703 ********* 2025-07-12 20:37:50.678256 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:37:50.678265 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:37:50.678273 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:37:50.678281 | orchestrator | 2025-07-12 20:37:50.678290 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:37:50.678307 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-07-12 
20:37:50.678316 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-07-12 20:37:50.678325 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-07-12 20:37:50.678334 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-07-12 20:37:50.678342 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-07-12 20:37:50.678351 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-07-12 20:37:50.678360 | orchestrator | 2025-07-12 20:37:50.678368 | orchestrator | 2025-07-12 20:37:50.678377 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:37:50.678386 | orchestrator | Saturday 12 July 2025 20:37:47 +0000 (0:00:31.911) 0:05:17.615 ********* 2025-07-12 20:37:50.678395 | orchestrator | =============================================================================== 2025-07-12 20:37:50.678403 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 45.92s 2025-07-12 20:37:50.678412 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 31.91s 2025-07-12 20:37:50.678421 | orchestrator | neutron : Restart neutron-server container ----------------------------- 31.13s 2025-07-12 20:37:50.678429 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 8.80s 2025-07-12 20:37:50.678445 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.54s 2025-07-12 20:37:50.678454 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.91s 2025-07-12 20:37:50.678462 | orchestrator | neutron : Copying over dnsmasq.conf ------------------------------------- 5.84s 2025-07-12 
20:37:50.678471 | orchestrator | neutron : Check neutron containers -------------------------------------- 5.78s 2025-07-12 20:37:50.678479 | orchestrator | neutron : Copying over neutron-tls-proxy.cfg ---------------------------- 5.77s 2025-07-12 20:37:50.678488 | orchestrator | neutron : Copying over mlnx_agent.ini ----------------------------------- 5.64s 2025-07-12 20:37:50.678496 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 5.56s 2025-07-12 20:37:50.678505 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 5.56s 2025-07-12 20:37:50.678513 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 5.55s 2025-07-12 20:37:50.678522 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.17s 2025-07-12 20:37:50.678530 | orchestrator | neutron : Copying over neutron_ovn_vpn_agent.ini ------------------------ 5.15s 2025-07-12 20:37:50.678545 | orchestrator | neutron : Copying over dhcp_agent.ini ----------------------------------- 5.13s 2025-07-12 20:37:50.678556 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 5.10s 2025-07-12 20:37:50.678565 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 5.01s 2025-07-12 20:37:50.678575 | orchestrator | neutron : Copying over l3_agent.ini ------------------------------------- 4.94s 2025-07-12 20:37:50.678584 | orchestrator | neutron : Copying over extra ml2 plugins -------------------------------- 4.87s 2025-07-12 20:37:50.678594 | orchestrator | 2025-07-12 20:37:50 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:37:50.678604 | orchestrator | 2025-07-12 20:37:50 | INFO  | Task 1e6bf4bc-29fb-4a9b-adb7-1c22890f2d13 is in state STARTED 2025-07-12 20:37:50.681133 | orchestrator | 2025-07-12 20:37:50 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in 
state STARTED 2025-07-12 20:37:50.681510 | orchestrator | 2025-07-12 20:37:50 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:37:53.711781 | orchestrator | 2025-07-12 20:37:53 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:37:53.714234 | orchestrator | 2025-07-12 20:37:53 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:37:53.714317 | orchestrator | 2025-07-12 20:37:53 | INFO  | Task 1e6bf4bc-29fb-4a9b-adb7-1c22890f2d13 is in state STARTED 2025-07-12 20:37:53.717392 | orchestrator | 2025-07-12 20:37:53 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:37:53.717457 | orchestrator | 2025-07-12 20:37:53 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:37:56.753309 | orchestrator | 2025-07-12 20:37:56 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:37:56.755615 | orchestrator | 2025-07-12 20:37:56 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:37:56.757661 | orchestrator | 2025-07-12 20:37:56 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:37:56.758526 | orchestrator | 2025-07-12 20:37:56 | INFO  | Task 1e6bf4bc-29fb-4a9b-adb7-1c22890f2d13 is in state SUCCESS 2025-07-12 20:37:56.761393 | orchestrator | 2025-07-12 20:37:56 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:37:56.761581 | orchestrator | 2025-07-12 20:37:56 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:37:59.799053 | orchestrator | 2025-07-12 20:37:59 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:37:59.801501 | orchestrator | 2025-07-12 20:37:59 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:37:59.802519 | orchestrator | 2025-07-12 20:37:59 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 
20:37:59.804399 | orchestrator | 2025-07-12 20:37:59 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:37:59.804430 | orchestrator | 2025-07-12 20:37:59 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:38:02.849783 | orchestrator | 2025-07-12 20:38:02 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:38:02.851630 | orchestrator | 2025-07-12 20:38:02 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:38:02.854291 | orchestrator | 2025-07-12 20:38:02 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:38:02.857414 | orchestrator | 2025-07-12 20:38:02 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:38:02.857475 | orchestrator | 2025-07-12 20:38:02 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:38:05.905453 | orchestrator | 2025-07-12 20:38:05 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:38:05.906355 | orchestrator | 2025-07-12 20:38:05 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:38:05.908698 | orchestrator | 2025-07-12 20:38:05 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:38:05.910665 | orchestrator | 2025-07-12 20:38:05 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:38:05.910718 | orchestrator | 2025-07-12 20:38:05 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:38:08.945770 | orchestrator | 2025-07-12 20:38:08 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:38:08.950418 | orchestrator | 2025-07-12 20:38:08 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:38:08.950582 | orchestrator | 2025-07-12 20:38:08 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:38:08.951061 | orchestrator 
| 2025-07-12 20:38:08 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:38:08.951377 | orchestrator | 2025-07-12 20:38:08 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:38:11.993015 | orchestrator | 2025-07-12 20:38:11 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:38:11.994582 | orchestrator | 2025-07-12 20:38:11 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:38:11.996490 | orchestrator | 2025-07-12 20:38:11 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:38:11.998776 | orchestrator | 2025-07-12 20:38:11 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:38:11.999092 | orchestrator | 2025-07-12 20:38:11 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:38:15.051816 | orchestrator | 2025-07-12 20:38:15 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:38:15.054199 | orchestrator | 2025-07-12 20:38:15 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:38:15.056571 | orchestrator | 2025-07-12 20:38:15 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:38:15.058930 | orchestrator | 2025-07-12 20:38:15 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:38:15.059329 | orchestrator | 2025-07-12 20:38:15 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:38:18.103437 | orchestrator | 2025-07-12 20:38:18 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:38:18.104677 | orchestrator | 2025-07-12 20:38:18 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:38:18.106628 | orchestrator | 2025-07-12 20:38:18 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:38:18.108899 | orchestrator | 2025-07-12 20:38:18 | INFO  | 
Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:38:18.109107 | orchestrator | 2025-07-12 20:38:18 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:38:21.157546 | orchestrator | 2025-07-12 20:38:21 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:38:21.162483 | orchestrator | 2025-07-12 20:38:21 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:38:21.162565 | orchestrator | 2025-07-12 20:38:21 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:38:21.163059 | orchestrator | 2025-07-12 20:38:21 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:38:21.163080 | orchestrator | 2025-07-12 20:38:21 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:38:24.213212 | orchestrator | 2025-07-12 20:38:24 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:38:24.216374 | orchestrator | 2025-07-12 20:38:24 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:38:24.218549 | orchestrator | 2025-07-12 20:38:24 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:38:24.220297 | orchestrator | 2025-07-12 20:38:24 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:38:24.220325 | orchestrator | 2025-07-12 20:38:24 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:38:27.273464 | orchestrator | 2025-07-12 20:38:27 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:38:27.275344 | orchestrator | 2025-07-12 20:38:27 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:38:27.277584 | orchestrator | 2025-07-12 20:38:27 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:38:27.279572 | orchestrator | 2025-07-12 20:38:27 | INFO  | Task 
07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:38:27.279616 | orchestrator | 2025-07-12 20:38:27 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:38:30.342448 | orchestrator | 2025-07-12 20:38:30 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:38:30.342545 | orchestrator | 2025-07-12 20:38:30 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:38:30.342575 | orchestrator | 2025-07-12 20:38:30 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:38:30.342658 | orchestrator | 2025-07-12 20:38:30 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:38:30.342675 | orchestrator | 2025-07-12 20:38:30 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:38:33.368323 | orchestrator | 2025-07-12 20:38:33 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:38:33.368832 | orchestrator | 2025-07-12 20:38:33 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:38:33.369925 | orchestrator | 2025-07-12 20:38:33 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:38:33.370698 | orchestrator | 2025-07-12 20:38:33 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:38:33.370732 | orchestrator | 2025-07-12 20:38:33 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:38:36.414323 | orchestrator | 2025-07-12 20:38:36 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:38:36.414711 | orchestrator | 2025-07-12 20:38:36 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:38:36.417136 | orchestrator | 2025-07-12 20:38:36 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:38:36.421128 | orchestrator | 2025-07-12 20:38:36 | INFO  | Task 
07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:38:36.422833 | orchestrator | 2025-07-12 20:38:36 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:38:39.473158 | orchestrator | 2025-07-12 20:38:39 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:38:39.475067 | orchestrator | 2025-07-12 20:38:39 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:38:39.476770 | orchestrator | 2025-07-12 20:38:39 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:38:39.478954 | orchestrator | 2025-07-12 20:38:39 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:38:39.478988 | orchestrator | 2025-07-12 20:38:39 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:38:42.533823 | orchestrator | 2025-07-12 20:38:42 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:38:42.533933 | orchestrator | 2025-07-12 20:38:42 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:38:42.536154 | orchestrator | 2025-07-12 20:38:42 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:38:42.537951 | orchestrator | 2025-07-12 20:38:42 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:38:42.537984 | orchestrator | 2025-07-12 20:38:42 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:38:45.597942 | orchestrator | 2025-07-12 20:38:45 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:38:45.600467 | orchestrator | 2025-07-12 20:38:45 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:38:45.603806 | orchestrator | 2025-07-12 20:38:45 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:38:45.607259 | orchestrator | 2025-07-12 20:38:45 | INFO  | Task 
07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:38:45.607343 | orchestrator | 2025-07-12 20:38:45 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:38:48.652899 | orchestrator | 2025-07-12 20:38:48 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:38:48.653497 | orchestrator | 2025-07-12 20:38:48 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:38:48.654272 | orchestrator | 2025-07-12 20:38:48 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:38:48.655380 | orchestrator | 2025-07-12 20:38:48 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:38:48.655413 | orchestrator | 2025-07-12 20:38:48 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:38:51.706495 | orchestrator | 2025-07-12 20:38:51 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:38:51.711103 | orchestrator | 2025-07-12 20:38:51 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state STARTED 2025-07-12 20:38:51.712608 | orchestrator | 2025-07-12 20:38:51 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:38:51.714771 | orchestrator | 2025-07-12 20:38:51 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:38:51.715337 | orchestrator | 2025-07-12 20:38:51 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:38:54.753056 | orchestrator | 2025-07-12 20:38:54 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:38:54.754434 | orchestrator | 2025-07-12 20:38:54 | INFO  | Task 2ffc1fb2-61b1-4e06-a792-7ceaa0e606c9 is in state SUCCESS 2025-07-12 20:38:54.756327 | orchestrator | 2025-07-12 20:38:54.756372 | orchestrator | 2025-07-12 20:38:54.756385 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-07-12 20:38:54.756397 | 
orchestrator |
2025-07-12 20:38:54.756408 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:38:54.756420 | orchestrator | Saturday 12 July 2025 20:37:52 +0000 (0:00:00.189) 0:00:00.189 *********
2025-07-12 20:38:54.756431 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:38:54.756443 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:38:54.756454 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:38:54.756466 | orchestrator |
2025-07-12 20:38:54.756477 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:38:54.756489 | orchestrator | Saturday 12 July 2025 20:37:52 +0000 (0:00:00.363) 0:00:00.552 *********
2025-07-12 20:38:54.756500 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-07-12 20:38:54.756512 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-07-12 20:38:54.756523 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2025-07-12 20:38:54.756533 | orchestrator |
2025-07-12 20:38:54.756544 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-07-12 20:38:54.756555 | orchestrator |
2025-07-12 20:38:54.756566 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-07-12 20:38:54.756605 | orchestrator | Saturday 12 July 2025 20:37:53 +0000 (0:00:00.948) 0:00:01.501 *********
2025-07-12 20:38:54.756617 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:38:54.756628 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:38:54.756638 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:38:54.756649 | orchestrator |
2025-07-12 20:38:54.756660 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:38:54.756671 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:38:54.756685 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:38:54.756696 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:38:54.756707 | orchestrator |
2025-07-12 20:38:54.756717 | orchestrator |
2025-07-12 20:38:54.756728 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:38:54.756738 | orchestrator | Saturday 12 July 2025 20:37:54 +0000 (0:00:00.847) 0:00:02.349 *********
2025-07-12 20:38:54.756749 | orchestrator | ===============================================================================
2025-07-12 20:38:54.756817 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.95s
2025-07-12 20:38:54.756831 | orchestrator | Waiting for Nova public port to be UP ----------------------------------- 0.85s
2025-07-12 20:38:54.756842 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.36s
2025-07-12 20:38:54.756853 | orchestrator |
2025-07-12 20:38:54.756863 | orchestrator |
2025-07-12 20:38:54.756875 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:38:54.756885 | orchestrator |
2025-07-12 20:38:54.756896 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:38:54.756907 | orchestrator | Saturday 12 July 2025 20:37:03 +0000 (0:00:00.268) 0:00:00.268 *********
2025-07-12 20:38:54.756918 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:38:54.756930 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:38:54.756943 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:38:54.756955 | orchestrator |
2025-07-12 20:38:54.756967 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:38:54.756980 | orchestrator | Saturday 12 July 2025 20:37:03
+0000 (0:00:00.308) 0:00:00.577 *********
2025-07-12 20:38:54.756992 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-07-12 20:38:54.757024 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-07-12 20:38:54.757037 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-07-12 20:38:54.757049 | orchestrator |
2025-07-12 20:38:54.757062 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-07-12 20:38:54.757074 | orchestrator |
2025-07-12 20:38:54.757086 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-07-12 20:38:54.757099 | orchestrator | Saturday 12 July 2025 20:37:03 +0000 (0:00:00.434) 0:00:01.012 *********
2025-07-12 20:38:54.757111 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:38:54.757124 | orchestrator |
2025-07-12 20:38:54.757136 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-07-12 20:38:54.757148 | orchestrator | Saturday 12 July 2025 20:37:04 +0000 (0:00:00.576) 0:00:01.588 *********
2025-07-12 20:38:54.757161 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-07-12 20:38:54.757173 | orchestrator |
2025-07-12 20:38:54.757186 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-07-12 20:38:54.757198 | orchestrator | Saturday 12 July 2025 20:37:07 +0000 (0:00:03.106) 0:00:04.695 *********
2025-07-12 20:38:54.757210 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-07-12 20:38:54.757233 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-07-12 20:38:54.757245 | orchestrator |
2025-07-12 20:38:54.757273 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-07-12 20:38:54.757288 | orchestrator | Saturday 12 July 2025 20:37:14 +0000 (0:00:06.437) 0:00:11.132 *********
2025-07-12 20:38:54.757301 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-12 20:38:54.757312 | orchestrator |
2025-07-12 20:38:54.757323 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-07-12 20:38:54.757334 | orchestrator | Saturday 12 July 2025 20:37:17 +0000 (0:00:03.216) 0:00:14.349 *********
2025-07-12 20:38:54.757360 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 20:38:54.757371 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-07-12 20:38:54.757382 | orchestrator |
2025-07-12 20:38:54.757393 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-07-12 20:38:54.757403 | orchestrator | Saturday 12 July 2025 20:37:21 +0000 (0:00:03.799) 0:00:18.149 *********
2025-07-12 20:38:54.757414 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 20:38:54.757425 | orchestrator |
2025-07-12 20:38:54.757435 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-07-12 20:38:54.757446 | orchestrator | Saturday 12 July 2025 20:37:24 +0000 (0:00:03.219) 0:00:21.368 *********
2025-07-12 20:38:54.757457 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-07-12 20:38:54.757467 | orchestrator |
2025-07-12 20:38:54.757478 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-07-12 20:38:54.757489 | orchestrator | Saturday 12 July 2025 20:37:28 +0000 (0:00:03.997) 0:00:25.366 *********
2025-07-12 20:38:54.757499 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:38:54.757510 | orchestrator |
2025-07-12 20:38:54.757521 | orchestrator | TASK [magnum : Creating Magnum trustee user]
*********************************** 2025-07-12 20:38:54.757531 | orchestrator | Saturday 12 July 2025 20:37:31 +0000 (0:00:03.174) 0:00:28.540 ********* 2025-07-12 20:38:54.757542 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:38:54.757552 | orchestrator | 2025-07-12 20:38:54.757563 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-07-12 20:38:54.757573 | orchestrator | Saturday 12 July 2025 20:37:35 +0000 (0:00:03.575) 0:00:32.116 ********* 2025-07-12 20:38:54.757584 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:38:54.757594 | orchestrator | 2025-07-12 20:38:54.757605 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-07-12 20:38:54.757615 | orchestrator | Saturday 12 July 2025 20:37:38 +0000 (0:00:03.337) 0:00:35.454 ********* 2025-07-12 20:38:54.757630 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:38:54.757647 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:38:54.757675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:38:54.757732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:38:54.757747 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:38:54.757759 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:38:54.757770 | orchestrator |
2025-07-12 20:38:54.757781 | orchestrator | TASK [magnum : Check if policies shall be overwritten] *************************
2025-07-12 20:38:54.757819 | orchestrator | Saturday 12 July 2025 20:37:39 +0000 (0:00:01.267) 0:00:36.722 *********
2025-07-12 20:38:54.757839 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:38:54.757850 | orchestrator |
2025-07-12 20:38:54.757861 | orchestrator | TASK [magnum : Set magnum policy file] *****************************************
2025-07-12 20:38:54.757872 | orchestrator | Saturday 12 July 2025 20:37:39 +0000 (0:00:00.125) 0:00:36.847 *********
2025-07-12 20:38:54.757883 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:38:54.757894 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:38:54.757904 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:38:54.757915 | orchestrator |
2025-07-12 20:38:54.757925 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-07-12 20:38:54.757936 | orchestrator | Saturday 12 July 2025 20:37:40 +0000 (0:00:00.588) 0:00:37.436 *********
2025-07-12 20:38:54.757947 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-07-12 20:38:54.757957 | orchestrator |
2025-07-12 20:38:54.757968 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2025-07-12 20:38:54.757979 | orchestrator | Saturday 12 July 2025 20:37:41 +0000 (0:00:00.911) 0:00:38.347 *********
2025-07-12 20:38:54.757991 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:38:54.758099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:38:54.758118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:38:54.758167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:38:54.758189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:38:54.758205 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:38:54.758217 | orchestrator |
2025-07-12 20:38:54.758229 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2025-07-12 20:38:54.758247 | orchestrator | Saturday 12 July 2025 20:37:43 +0000 (0:00:02.255) 0:00:40.603 *********
2025-07-12 20:38:54.758259 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:38:54.758270 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:38:54.758281 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:38:54.758292 | orchestrator |
2025-07-12 20:38:54.758303 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-07-12 20:38:54.758314 | orchestrator | Saturday 12 July 2025 20:37:43 +0000 (0:00:00.375) 0:00:40.978 *********
2025-07-12 20:38:54.758326 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:38:54.758337 | orchestrator |
2025-07-12 20:38:54.758347 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2025-07-12 20:38:54.758358 | orchestrator | Saturday 12 July 2025
20:37:44 +0000 (0:00:00.771) 0:00:41.750 ********* 2025-07-12 20:38:54.758369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:38:54.758390 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:38:54.758403 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:38:54.758420 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:38:54.758439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': 
'', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:38:54.758451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:38:54.758470 | orchestrator | 2025-07-12 20:38:54.758481 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-07-12 20:38:54.758492 | orchestrator | Saturday 12 July 2025 20:37:46 +0000 (0:00:02.294) 0:00:44.045 ********* 2025-07-12 20:38:54.758504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 20:38:54.758516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:38:54.758527 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:38:54.758550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 20:38:54.758562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:38:54.758574 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:38:54.758592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 20:38:54.758604 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:38:54.758615 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:38:54.758626 | orchestrator | 2025-07-12 20:38:54.758637 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-07-12 20:38:54.758648 | orchestrator | Saturday 12 July 2025 20:37:47 +0000 (0:00:00.710) 0:00:44.755 ********* 2025-07-12 20:38:54.758670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 20:38:54.758690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:38:54.758702 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:38:54.758713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 20:38:54.758738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:38:54.758749 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:38:54.758760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 20:38:54.758777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': 
{'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:38:54.758788 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:38:54.758799 | orchestrator | 2025-07-12 20:38:54.758810 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-07-12 20:38:54.758821 | orchestrator | Saturday 12 July 2025 20:37:48 +0000 (0:00:01.308) 0:00:46.064 ********* 2025-07-12 20:38:54.758841 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 
20:38:54.758861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:38:54.758873 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:38:54.758885 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 
'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:38:54.758907 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:38:54.758919 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:38:54.758938 | orchestrator | 2025-07-12 20:38:54.758949 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-07-12 20:38:54.758960 | orchestrator | Saturday 12 July 2025 20:37:51 +0000 (0:00:02.494) 0:00:48.558 ********* 2025-07-12 20:38:54.758972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:38:54.758983 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:38:54.759000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:38:54.759077 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:38:54.759098 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:38:54.759110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:38:54.759122 | orchestrator | 2025-07-12 20:38:54.759132 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-07-12 20:38:54.759144 | orchestrator | Saturday 12 July 2025 20:37:57 +0000 
(0:00:05.854) 0:00:54.412 ********* 2025-07-12 20:38:54.759156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 20:38:54.759172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:38:54.759183 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:38:54.759207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 20:38:54.759234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:38:54.759245 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:38:54.759255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-07-12 20:38:54.759266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:38:54.759275 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:38:54.759285 | orchestrator | 2025-07-12 20:38:54.759295 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-07-12 20:38:54.759304 | orchestrator | Saturday 12 July 2025 20:37:58 +0000 (0:00:00.955) 0:00:55.367 ********* 2025-07-12 20:38:54.759325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:38:54.759344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:38:54.759355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711', 'environment': 
{'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-07-12 20:38:54.759365 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:38:54.759375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:38:54.759396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:38:54.759414 | orchestrator | 2025-07-12 20:38:54.759424 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-07-12 20:38:54.759434 | orchestrator | Saturday 12 July 2025 20:38:00 +0000 (0:00:02.359) 0:00:57.726 ********* 2025-07-12 20:38:54.759444 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:38:54.759453 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:38:54.759463 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:38:54.759472 | orchestrator | 2025-07-12 20:38:54.759482 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-07-12 20:38:54.759491 | orchestrator | Saturday 12 July 2025 20:38:00 +0000 (0:00:00.313) 0:00:58.040 ********* 2025-07-12 20:38:54.759501 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:38:54.759511 | orchestrator | 2025-07-12 20:38:54.759520 | orchestrator | 
TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-07-12 20:38:54.759530 | orchestrator | Saturday 12 July 2025 20:38:03 +0000 (0:00:02.191) 0:01:00.232 ********* 2025-07-12 20:38:54.759540 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:38:54.759549 | orchestrator | 2025-07-12 20:38:54.759558 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-07-12 20:38:54.759568 | orchestrator | Saturday 12 July 2025 20:38:05 +0000 (0:00:02.012) 0:01:02.244 ********* 2025-07-12 20:38:54.759577 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:38:54.759587 | orchestrator | 2025-07-12 20:38:54.759596 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-07-12 20:38:54.759606 | orchestrator | Saturday 12 July 2025 20:38:20 +0000 (0:00:15.746) 0:01:17.990 ********* 2025-07-12 20:38:54.759615 | orchestrator | 2025-07-12 20:38:54.759625 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-07-12 20:38:54.759634 | orchestrator | Saturday 12 July 2025 20:38:20 +0000 (0:00:00.061) 0:01:18.051 ********* 2025-07-12 20:38:54.759643 | orchestrator | 2025-07-12 20:38:54.759653 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-07-12 20:38:54.759662 | orchestrator | Saturday 12 July 2025 20:38:21 +0000 (0:00:00.070) 0:01:18.122 ********* 2025-07-12 20:38:54.759672 | orchestrator | 2025-07-12 20:38:54.759682 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-07-12 20:38:54.759691 | orchestrator | Saturday 12 July 2025 20:38:21 +0000 (0:00:00.068) 0:01:18.191 ********* 2025-07-12 20:38:54.759701 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:38:54.759710 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:38:54.759720 | orchestrator | changed: [testbed-node-1] 2025-07-12 
20:38:54.759730 | orchestrator | 2025-07-12 20:38:54.759739 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-07-12 20:38:54.759749 | orchestrator | Saturday 12 July 2025 20:38:41 +0000 (0:00:20.029) 0:01:38.221 ********* 2025-07-12 20:38:54.759758 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:38:54.759768 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:38:54.759777 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:38:54.759787 | orchestrator | 2025-07-12 20:38:54.759796 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:38:54.759806 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-07-12 20:38:54.759817 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 20:38:54.759834 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-07-12 20:38:54.759844 | orchestrator | 2025-07-12 20:38:54.759854 | orchestrator | 2025-07-12 20:38:54.759864 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:38:54.759874 | orchestrator | Saturday 12 July 2025 20:38:52 +0000 (0:00:10.920) 0:01:49.142 ********* 2025-07-12 20:38:54.759884 | orchestrator | =============================================================================== 2025-07-12 20:38:54.759893 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 20.03s 2025-07-12 20:38:54.759903 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.75s 2025-07-12 20:38:54.759912 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 10.92s 2025-07-12 20:38:54.759923 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 
6.44s 2025-07-12 20:38:54.759940 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.85s 2025-07-12 20:38:54.759954 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.00s 2025-07-12 20:38:54.759968 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.80s 2025-07-12 20:38:54.759977 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.58s 2025-07-12 20:38:54.759987 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.34s 2025-07-12 20:38:54.759996 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.22s 2025-07-12 20:38:54.760025 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.22s 2025-07-12 20:38:54.760044 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.17s 2025-07-12 20:38:54.760054 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.11s 2025-07-12 20:38:54.760063 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.49s 2025-07-12 20:38:54.760073 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.36s 2025-07-12 20:38:54.760083 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.29s 2025-07-12 20:38:54.760099 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.26s 2025-07-12 20:38:54.760110 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.19s 2025-07-12 20:38:54.760119 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.01s 2025-07-12 20:38:54.760129 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 1.31s 
2025-07-12 20:38:54.760139 | orchestrator | 2025-07-12 20:38:54 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:38:54.760149 | orchestrator | 2025-07-12 20:38:54 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:38:54.760158 | orchestrator | 2025-07-12 20:38:54 | INFO  | Wait 1 second(s) until the next check 
2025-07-12 20:39:28.285003 | orchestrator | 2025-07-12 20:39:28 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:39:31.335610 | orchestrator | 2025-07-12 20:39:31 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:39:31.336924 | orchestrator | 2025-07-12 20:39:31 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:39:31.339360 | orchestrator | 2025-07-12 20:39:31 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:39:31.339486 | orchestrator | 2025-07-12 20:39:31 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:39:34.387901 | orchestrator | 2025-07-12 20:39:34 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:39:34.389344 | orchestrator | 2025-07-12 20:39:34 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:39:34.391966 | orchestrator | 2025-07-12 20:39:34 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:39:34.392183 | orchestrator | 2025-07-12 20:39:34 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:39:37.445102 | orchestrator | 2025-07-12 20:39:37 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state STARTED 2025-07-12 20:39:37.447723 | orchestrator | 2025-07-12 20:39:37 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED 2025-07-12 20:39:37.456471 | orchestrator | 2025-07-12 20:39:37 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED 2025-07-12 20:39:37.456561 | orchestrator | 2025-07-12 20:39:37 | INFO  | Wait 1 second(s) until the next check 2025-07-12 20:39:40.507618 | orchestrator | 2025-07-12 20:39:40 | INFO  | Task cb318024-99ec-4032-96d6-af1ae4fc13d5 is in state SUCCESS 2025-07-12 20:39:40.509661 | orchestrator | 2025-07-12 20:39:40.509685 | orchestrator | 2025-07-12 20:39:40.509690 | orchestrator | PLAY [Group hosts based on configuration] 
**************************************
2025-07-12 20:39:40.509695 | orchestrator |
2025-07-12 20:39:40.509700 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2025-07-12 20:39:40.509705 | orchestrator | Saturday 12 July 2025 20:29:48 +0000 (0:00:00.473) 0:00:00.474 *********
2025-07-12 20:39:40.509709 | orchestrator | changed: [testbed-manager]
2025-07-12 20:39:40.509714 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:39:40.509719 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:39:40.509723 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:39:40.509727 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:39:40.509731 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:39:40.509752 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:39:40.509756 | orchestrator |
2025-07-12 20:39:40.509760 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:39:40.509764 | orchestrator | Saturday 12 July 2025 20:29:49 +0000 (0:00:01.546) 0:00:02.020 *********
2025-07-12 20:39:40.509779 | orchestrator | changed: [testbed-manager]
2025-07-12 20:39:40.509783 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:39:40.509787 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:39:40.509791 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:39:40.509794 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:39:40.509798 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:39:40.509802 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:39:40.509806 | orchestrator |
2025-07-12 20:39:40.509810 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:39:40.509814 | orchestrator | Saturday 12 July 2025 20:29:50 +0000 (0:00:00.904) 0:00:02.924 *********
2025-07-12 20:39:40.509818 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2025-07-12 20:39:40.509823 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2025-07-12 20:39:40.509827 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2025-07-12 20:39:40.509831 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2025-07-12 20:39:40.509834 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2025-07-12 20:39:40.509838 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2025-07-12 20:39:40.509842 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2025-07-12 20:39:40.509846 | orchestrator |
2025-07-12 20:39:40.509851 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2025-07-12 20:39:40.509854 | orchestrator |
2025-07-12 20:39:40.509858 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-07-12 20:39:40.509862 | orchestrator | Saturday 12 July 2025 20:29:52 +0000 (0:00:01.212) 0:00:04.137 *********
2025-07-12 20:39:40.509866 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:39:40.509870 | orchestrator |
2025-07-12 20:39:40.509874 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2025-07-12 20:39:40.509878 | orchestrator | Saturday 12 July 2025 20:29:53 +0000 (0:00:01.179) 0:00:05.316 *********
2025-07-12 20:39:40.509882 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2025-07-12 20:39:40.509887 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2025-07-12 20:39:40.509891 | orchestrator |
2025-07-12 20:39:40.509895 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2025-07-12 20:39:40.509899 | orchestrator | Saturday 12 July 2025 20:29:57 +0000 (0:00:04.056) 0:00:09.373 *********
2025-07-12 20:39:40.509903 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 20:39:40.509907 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-07-12 20:39:40.509910 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:39:40.509914 | orchestrator |
2025-07-12 20:39:40.509918 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-07-12 20:39:40.509922 | orchestrator | Saturday 12 July 2025 20:30:01 +0000 (0:00:01.025) 0:00:13.482 *********
2025-07-12 20:39:40.509926 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:39:40.509930 | orchestrator |
2025-07-12 20:39:40.509934 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2025-07-12 20:39:40.509938 | orchestrator | Saturday 12 July 2025 20:30:02 +0000 (0:00:01.025) 0:00:14.507 *********
2025-07-12 20:39:40.509941 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:39:40.509945 | orchestrator |
2025-07-12 20:39:40.509949 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2025-07-12 20:39:40.509953 | orchestrator | Saturday 12 July 2025 20:30:04 +0000 (0:00:02.344) 0:00:16.852 *********
2025-07-12 20:39:40.509957 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:39:40.509961 | orchestrator |
2025-07-12 20:39:40.509969 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-07-12 20:39:40.509973 | orchestrator | Saturday 12 July 2025 20:30:08 +0000 (0:00:03.331) 0:00:20.183 *********
2025-07-12 20:39:40.509976 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:39:40.509981 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.509985 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.509989 | orchestrator |
2025-07-12 20:39:40.509993 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-07-12 20:39:40.509997 | orchestrator | Saturday 12 July 2025 20:30:08 +0000 (0:00:00.796) 0:00:20.980 *********
2025-07-12 20:39:40.510001 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:39:40.510005 | orchestrator |
2025-07-12 20:39:40.510009 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2025-07-12 20:39:40.510071 | orchestrator | Saturday 12 July 2025 20:30:37 +0000 (0:00:28.987) 0:00:49.967 *********
2025-07-12 20:39:40.510077 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:39:40.510081 | orchestrator |
2025-07-12 20:39:40.510084 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-07-12 20:39:40.510088 | orchestrator | Saturday 12 July 2025 20:30:51 +0000 (0:00:13.188) 0:01:03.156 *********
2025-07-12 20:39:40.510092 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:39:40.510095 | orchestrator |
2025-07-12 20:39:40.510099 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-07-12 20:39:40.510122 | orchestrator | Saturday 12 July 2025 20:31:02 +0000 (0:00:11.002) 0:01:14.158 *********
2025-07-12 20:39:40.510135 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:39:40.510139 | orchestrator |
2025-07-12 20:39:40.510143 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2025-07-12 20:39:40.510147 | orchestrator | Saturday 12 July 2025 20:31:03 +0000 (0:00:01.666) 0:01:15.825 *********
2025-07-12 20:39:40.510150 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:39:40.510154 | orchestrator |
2025-07-12 20:39:40.510158 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-07-12 20:39:40.510162 | orchestrator | Saturday 12 July 2025 20:31:04 +0000 (0:00:00.773) 0:01:16.598 *********
2025-07-12 20:39:40.510165 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:39:40.510169 | orchestrator |
2025-07-12 20:39:40.510173 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2025-07-12 20:39:40.510176 | orchestrator | Saturday 12 July 2025 20:31:05 +0000 (0:00:00.998) 0:01:17.596 *********
2025-07-12 20:39:40.510184 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:39:40.510188 | orchestrator |
2025-07-12 20:39:40.510192 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-07-12 20:39:40.510195 | orchestrator | Saturday 12 July 2025 20:31:22 +0000 (0:00:16.558) 0:01:34.155 *********
2025-07-12 20:39:40.510199 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:39:40.510203 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.510207 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.510210 | orchestrator |
2025-07-12 20:39:40.510214 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2025-07-12 20:39:40.510218 | orchestrator |
2025-07-12 20:39:40.510221 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2025-07-12 20:39:40.510225 | orchestrator | Saturday 12 July 2025 20:31:22 +0000 (0:00:00.737) 0:01:34.893 *********
2025-07-12 20:39:40.510229 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:39:40.510232 | orchestrator |
2025-07-12 20:39:40.510236 | orchestrator | TASK [nova-cell : Creating Nova cell database] *********************************
2025-07-12 20:39:40.510240 | orchestrator | Saturday 12 July 2025 20:31:23 +0000 (0:00:01.004) 0:01:35.897 *********
2025-07-12 20:39:40.510244 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.510247 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.510251 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:39:40.510259 | orchestrator |
2025-07-12 20:39:40.510262 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2025-07-12 20:39:40.510266 | orchestrator | Saturday 12 July 2025 20:31:25 +0000 (0:00:01.952) 0:01:37.850 *********
2025-07-12 20:39:40.510270 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.510274 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.510277 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:39:40.510281 | orchestrator |
2025-07-12 20:39:40.510285 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-07-12 20:39:40.510289 | orchestrator | Saturday 12 July 2025 20:31:27 +0000 (0:00:01.772) 0:01:39.622 *********
2025-07-12 20:39:40.510292 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:39:40.510296 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.510300 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.510303 | orchestrator |
2025-07-12 20:39:40.510307 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-07-12 20:39:40.510311 | orchestrator | Saturday 12 July 2025 20:31:27 +0000 (0:00:00.346) 0:01:39.969 *********
2025-07-12 20:39:40.510315 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-07-12 20:39:40.510318 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.510322 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-07-12 20:39:40.510326 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.510329 | orchestrator | ok: [testbed-node-0] => (item=None)
2025-07-12 20:39:40.510333 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2025-07-12 20:39:40.510337 | orchestrator |
2025-07-12 20:39:40.510340 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2025-07-12 20:39:40.510344 | orchestrator | Saturday 12 July 2025 20:31:34 +0000 (0:00:06.852) 0:01:46.821 *********
2025-07-12 20:39:40.510348 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:39:40.510352 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.510355 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.510359 | orchestrator |
2025-07-12 20:39:40.510363 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2025-07-12 20:39:40.510366 | orchestrator | Saturday 12 July 2025 20:31:35 +0000 (0:00:00.445) 0:01:47.266 *********
2025-07-12 20:39:40.510370 | orchestrator | skipping: [testbed-node-0] => (item=None)
2025-07-12 20:39:40.510374 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:39:40.510377 | orchestrator | skipping: [testbed-node-1] => (item=None)
2025-07-12 20:39:40.510381 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.510385 | orchestrator | skipping: [testbed-node-2] => (item=None)
2025-07-12 20:39:40.510389 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.510392 | orchestrator |
2025-07-12 20:39:40.510396 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2025-07-12 20:39:40.510400 | orchestrator | Saturday 12 July 2025 20:31:35 +0000 (0:00:00.754) 0:01:48.021 *********
2025-07-12 20:39:40.510404 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.510407 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:39:40.510411 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.510415 | orchestrator |
2025-07-12 20:39:40.510418 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2025-07-12 20:39:40.510422 | orchestrator | Saturday 12 July 2025 20:31:36 +0000 (0:00:00.626) 0:01:48.648 *********
2025-07-12 20:39:40.510426 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.510430 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.510433 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:39:40.510437 | orchestrator |
2025-07-12 20:39:40.510441 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-07-12 20:39:40.510444 | orchestrator | Saturday 12 July 2025 20:31:37 +0000 (0:00:00.885) 0:01:49.533 *********
2025-07-12 20:39:40.510448 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.510452 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.510458 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:39:40.510465 | orchestrator |
2025-07-12 20:39:40.510469 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-07-12 20:39:40.510543 | orchestrator | Saturday 12 July 2025 20:31:39 +0000 (0:00:02.105) 0:01:51.639 *********
2025-07-12 20:39:40.510548 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.510551 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.510555 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:39:40.510559 | orchestrator |
2025-07-12 20:39:40.510562 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-07-12 20:39:40.510566 | orchestrator | Saturday 12 July 2025 20:31:58 +0000 (0:00:18.926) 0:02:10.566 *********
2025-07-12 20:39:40.510570 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.510573 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.510577 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:39:40.510581 | orchestrator |
2025-07-12 20:39:40.510585 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-07-12 20:39:40.510591 | orchestrator | Saturday 12 July 2025 20:32:12 +0000 (0:00:14.362) 0:02:24.928 *********
2025-07-12 20:39:40.510595 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:39:40.510599 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.510603 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.510606 | orchestrator |
2025-07-12 20:39:40.510610 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-07-12 20:39:40.510614 | orchestrator | Saturday 12 July 2025 20:32:13 +0000 (0:00:00.929) 0:02:25.857 *********
2025-07-12 20:39:40.510617 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.510621 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.510625 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:39:40.510628 | orchestrator |
2025-07-12 20:39:40.510632 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-07-12 20:39:40.510636 | orchestrator | Saturday 12 July 2025 20:32:25 +0000 (0:00:12.034) 0:02:37.892 *********
2025-07-12 20:39:40.510639 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:39:40.510643 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.510647 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.510651 | orchestrator |
2025-07-12 20:39:40.510654 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-07-12 20:39:40.510658 | orchestrator | Saturday 12 July 2025 20:32:27 +0000 (0:00:01.747) 0:02:39.640 *********
2025-07-12 20:39:40.510662 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:39:40.510665 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.510669 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.510673 | orchestrator |
2025-07-12 20:39:40.510676 | orchestrator | PLAY [Apply role nova] *********************************************************
2025-07-12 20:39:40.510680 | orchestrator |
2025-07-12 20:39:40.510684 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-07-12 20:39:40.510688 | orchestrator | Saturday 12 July 2025 20:32:27 +0000 (0:00:00.331) 0:02:39.971 *********
2025-07-12 20:39:40.510691 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:39:40.510696 | orchestrator |
2025-07-12 20:39:40.510700 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-07-12 20:39:40.510703 | orchestrator | Saturday 12 July 2025 20:32:28 +0000 (0:00:00.562) 0:02:40.534 *********
2025-07-12 20:39:40.510707 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2025-07-12 20:39:40.510711 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-07-12 20:39:40.510715 | orchestrator |
2025-07-12 20:39:40.510719 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-07-12 20:39:40.510722 | orchestrator | Saturday 12 July 2025 20:32:31 +0000 (0:00:03.450) 0:02:43.985 *********
2025-07-12 20:39:40.510726 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2025-07-12 20:39:40.510735 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2025-07-12 20:39:40.510739 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-07-12 20:39:40.510743 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-07-12 20:39:40.510746 | orchestrator |
2025-07-12 20:39:40.510750 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-07-12 20:39:40.510754 | orchestrator | Saturday 12 July 2025 20:32:38 +0000 (0:00:06.539) 0:02:50.524 *********
2025-07-12 20:39:40.510758 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-12 20:39:40.510761 | orchestrator |
2025-07-12 20:39:40.510765 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-07-12 20:39:40.510769 | orchestrator | Saturday 12 July 2025 20:32:41 +0000 (0:00:03.382) 0:02:53.906 *********
2025-07-12 20:39:40.510772 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 20:39:40.510776 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-07-12 20:39:40.510780 | orchestrator |
2025-07-12 20:39:40.510783 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-07-12 20:39:40.510787 | orchestrator | Saturday 12 July 2025 20:32:45 +0000 (0:00:03.309) 0:02:57.215 *********
2025-07-12 20:39:40.510791 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 20:39:40.510795 | orchestrator |
2025-07-12 20:39:40.510798 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-07-12 20:39:40.510802 | orchestrator | Saturday 12 July 2025 20:32:47 +0000 (0:00:02.869) 0:03:00.085 *********
2025-07-12 20:39:40.510806 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-07-12 20:39:40.510810 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-07-12 20:39:40.510813 | orchestrator |
2025-07-12 20:39:40.510817 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-07-12 20:39:40.510823 | orchestrator | Saturday 12 July 2025 20:32:55 +0000 (0:00:07.285) 0:03:07.371 *********
2025-07-12 20:39:40.510834 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 20:39:40.510840 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 20:39:40.510854 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-07-12 20:39:40.510864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:39:40.510872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:39:40.510877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-07-12 20:39:40.510880 | orchestrator |
2025-07-12 20:39:40.510884 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-07-12 20:39:40.510888 | orchestrator | Saturday 12 July 2025 20:32:56 +0000 (0:00:01.337) 0:03:08.708 *********
2025-07-12 20:39:40.510895 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:39:40.510899 | orchestrator |
2025-07-12 20:39:40.510902 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-07-12 20:39:40.510906 | orchestrator | Saturday 12 July 2025 20:32:56 +0000 (0:00:00.132) 0:03:08.841 *********
2025-07-12 20:39:40.510910 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:39:40.510914 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.510917 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.510921 | orchestrator | 2025-07-12 20:39:40.510925 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-07-12 20:39:40.510928 | orchestrator | Saturday 12 July 2025 20:32:57 +0000 (0:00:00.534) 0:03:09.376 ********* 2025-07-12 20:39:40.510932 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 20:39:40.510936 | orchestrator | 2025-07-12 20:39:40.510939 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-07-12 20:39:40.510943 | orchestrator | Saturday 12 July 2025 20:32:57 +0000 (0:00:00.664) 0:03:10.041 ********* 2025-07-12 20:39:40.510947 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.510950 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.510954 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.510958 | orchestrator | 2025-07-12 20:39:40.510962 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-07-12 20:39:40.510965 | orchestrator | Saturday 12 July 2025 20:32:58 +0000 (0:00:00.327) 0:03:10.368 ********* 2025-07-12 20:39:40.510969 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:39:40.510973 | orchestrator | 2025-07-12 20:39:40.510976 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-07-12 20:39:40.510980 | orchestrator | Saturday 12 July 2025 20:32:58 +0000 (0:00:00.743) 0:03:11.112 ********* 2025-07-12 20:39:40.510987 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:39:40.510994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 
'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:39:40.511002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:39:40.511007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.511011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.511020 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.511024 | orchestrator | 2025-07-12 20:39:40.511046 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-07-12 20:39:40.511051 | orchestrator | Saturday 12 July 2025 20:33:01 +0000 (0:00:02.408) 0:03:13.520 ********* 2025-07-12 20:39:40.511058 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 20:39:40.511067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.511071 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.511075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 20:39:40.511082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.511086 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.511093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 20:39:40.511101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.511105 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.511109 | orchestrator | 2025-07-12 20:39:40.511112 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-07-12 20:39:40.511116 | orchestrator | Saturday 12 July 2025 20:33:01 +0000 (0:00:00.567) 0:03:14.088 ********* 2025-07-12 20:39:40.511120 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 20:39:40.511124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.511128 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.511241 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 20:39:40.511252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.511256 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.511260 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 20:39:40.511264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.511268 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.511272 | orchestrator | 2025-07-12 
20:39:40.511276 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-07-12 20:39:40.511279 | orchestrator | Saturday 12 July 2025 20:33:03 +0000 (0:00:01.527) 0:03:15.615 ********* 2025-07-12 20:39:40.511289 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:39:40.511299 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:39:40.511304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:39:40.511310 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.511318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.511324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.511328 | 
orchestrator | 2025-07-12 20:39:40.511332 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-07-12 20:39:40.511336 | orchestrator | Saturday 12 July 2025 20:33:06 +0000 (0:00:03.018) 0:03:18.637 ********* 2025-07-12 20:39:40.511340 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:39:40.511345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:39:40.511359 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:39:40.511363 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.511368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.511372 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 
'timeout': '30'}}}) 2025-07-12 20:39:40.511375 | orchestrator | 2025-07-12 20:39:40.511379 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-07-12 20:39:40.511383 | orchestrator | Saturday 12 July 2025 20:33:15 +0000 (0:00:09.144) 0:03:27.782 ********* 2025-07-12 20:39:40.511389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 20:39:40.511399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.511403 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.511407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 20:39:40.511411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.511415 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.511419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-07-12 20:39:40.511431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.511435 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.511439 | orchestrator | 2025-07-12 20:39:40.511443 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-07-12 20:39:40.511446 | orchestrator | Saturday 12 July 2025 20:33:16 +0000 (0:00:01.093) 0:03:28.876 ********* 2025-07-12 20:39:40.511450 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:39:40.511454 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:39:40.511460 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:39:40.511464 | orchestrator | 2025-07-12 20:39:40.511467 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-07-12 20:39:40.511471 | orchestrator | Saturday 12 July 2025 20:33:18 +0000 (0:00:02.023) 0:03:30.899 ********* 2025-07-12 20:39:40.511475 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.511478 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.511482 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.511486 | orchestrator | 2025-07-12 20:39:40.511489 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-07-12 20:39:40.511493 | orchestrator | Saturday 12 July 2025 20:33:19 +0000 (0:00:00.368) 0:03:31.268 ********* 2025-07-12 20:39:40.511497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:39:40.511501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:39:40.511515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:30.0.1.20250711', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-07-12 20:39:40.511520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.511524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.511528 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.511532 | orchestrator | 2025-07-12 20:39:40.511535 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-07-12 20:39:40.511542 | orchestrator | Saturday 12 July 2025 20:33:21 +0000 (0:00:01.880) 0:03:33.150 ********* 2025-07-12 20:39:40.511546 | orchestrator | 2025-07-12 20:39:40.511550 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-07-12 20:39:40.511554 | orchestrator | Saturday 12 July 2025 20:33:21 +0000 (0:00:00.330) 0:03:33.481 ********* 2025-07-12 20:39:40.511557 | orchestrator | 
2025-07-12 20:39:40.511561 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-07-12 20:39:40.511565 | orchestrator | Saturday 12 July 2025 20:33:21 +0000 (0:00:00.361) 0:03:33.842 ********* 2025-07-12 20:39:40.511568 | orchestrator | 2025-07-12 20:39:40.511572 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-07-12 20:39:40.511576 | orchestrator | Saturday 12 July 2025 20:33:22 +0000 (0:00:00.575) 0:03:34.418 ********* 2025-07-12 20:39:40.511579 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:39:40.511583 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:39:40.511587 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:39:40.511590 | orchestrator | 2025-07-12 20:39:40.511594 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-07-12 20:39:40.511598 | orchestrator | Saturday 12 July 2025 20:33:47 +0000 (0:00:25.091) 0:03:59.510 ********* 2025-07-12 20:39:40.511601 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:39:40.511605 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:39:40.511609 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:39:40.511612 | orchestrator | 2025-07-12 20:39:40.511616 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-07-12 20:39:40.511620 | orchestrator | 2025-07-12 20:39:40.511623 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-12 20:39:40.511627 | orchestrator | Saturday 12 July 2025 20:34:01 +0000 (0:00:14.515) 0:04:14.026 ********* 2025-07-12 20:39:40.511631 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:39:40.511635 | orchestrator | 2025-07-12 20:39:40.511641 | orchestrator | TASK [nova-cell : 
include_tasks] *********************************************** 2025-07-12 20:39:40.511644 | orchestrator | Saturday 12 July 2025 20:34:04 +0000 (0:00:02.807) 0:04:16.833 ********* 2025-07-12 20:39:40.511648 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:39:40.511652 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:39:40.511655 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:39:40.511659 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.511663 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.511666 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.511670 | orchestrator | 2025-07-12 20:39:40.511674 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-07-12 20:39:40.511677 | orchestrator | Saturday 12 July 2025 20:34:06 +0000 (0:00:01.977) 0:04:18.811 ********* 2025-07-12 20:39:40.511681 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.511685 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.511688 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.511694 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-07-12 20:39:40.511772 | orchestrator | 2025-07-12 20:39:40.511777 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-07-12 20:39:40.511781 | orchestrator | Saturday 12 July 2025 20:34:07 +0000 (0:00:01.261) 0:04:20.072 ********* 2025-07-12 20:39:40.511785 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-07-12 20:39:40.511788 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-07-12 20:39:40.511792 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-07-12 20:39:40.511796 | orchestrator | 2025-07-12 20:39:40.511800 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-07-12 20:39:40.511803 | orchestrator | Saturday 12 
July 2025 20:34:08 +0000 (0:00:01.054) 0:04:21.127 ********* 2025-07-12 20:39:40.511807 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-07-12 20:39:40.511814 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-07-12 20:39:40.511818 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-07-12 20:39:40.511821 | orchestrator | 2025-07-12 20:39:40.511825 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-07-12 20:39:40.511829 | orchestrator | Saturday 12 July 2025 20:34:10 +0000 (0:00:01.613) 0:04:22.741 ********* 2025-07-12 20:39:40.511833 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-07-12 20:39:40.511836 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:39:40.512186 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-07-12 20:39:40.512199 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:39:40.512204 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-07-12 20:39:40.512208 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:39:40.512213 | orchestrator | 2025-07-12 20:39:40.512217 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-07-12 20:39:40.512222 | orchestrator | Saturday 12 July 2025 20:34:11 +0000 (0:00:00.849) 0:04:23.591 ********* 2025-07-12 20:39:40.512227 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-12 20:39:40.512231 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-07-12 20:39:40.512236 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 20:39:40.512240 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 20:39:40.512244 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 
2025-07-12 20:39:40.512248 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-12 20:39:40.512252 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-12 20:39:40.512255 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.512259 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-07-12 20:39:40.512263 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 20:39:40.512266 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 20:39:40.512270 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.512274 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-07-12 20:39:40.512277 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-07-12 20:39:40.512281 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.512285 | orchestrator | 2025-07-12 20:39:40.512288 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-07-12 20:39:40.512292 | orchestrator | Saturday 12 July 2025 20:34:13 +0000 (0:00:01.954) 0:04:25.545 ********* 2025-07-12 20:39:40.512296 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:39:40.512299 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.512303 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:39:40.512307 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.512310 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.512314 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:39:40.512318 | orchestrator | 2025-07-12 20:39:40.512321 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-07-12 20:39:40.512325 | orchestrator | Saturday 12 July 2025 20:34:16 +0000 
(0:00:02.660) 0:04:28.206 ********* 2025-07-12 20:39:40.512329 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.512332 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.512336 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.512340 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:39:40.512343 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:39:40.512347 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:39:40.512351 | orchestrator | 2025-07-12 20:39:40.512361 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-07-12 20:39:40.512365 | orchestrator | Saturday 12 July 2025 20:34:18 +0000 (0:00:02.093) 0:04:30.300 ********* 2025-07-12 20:39:40.512374 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512384 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 
'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512389 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512394 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512398 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 
'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512405 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512413 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512421 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512425 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512442 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512473 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512487 | orchestrator | 2025-07-12 20:39:40.512491 | orchestrator | TASK [nova-cell : include_tasks] 
*********************************************** 2025-07-12 20:39:40.512495 | orchestrator | Saturday 12 July 2025 20:34:23 +0000 (0:00:05.391) 0:04:35.691 ********* 2025-07-12 20:39:40.512499 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:39:40.512503 | orchestrator | 2025-07-12 20:39:40.512507 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-07-12 20:39:40.512511 | orchestrator | Saturday 12 July 2025 20:34:27 +0000 (0:00:03.500) 0:04:39.192 ********* 2025-07-12 20:39:40.512515 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512522 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512543 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512547 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512562 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512573 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512580 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512596 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512602 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.512606 | orchestrator | 2025-07-12 20:39:40.512610 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-07-12 20:39:40.512614 | orchestrator | Saturday 12 July 2025 20:34:33 +0000 
(0:00:06.794) 0:04:45.986 ********* 2025-07-12 20:39:40.512622 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 20:39:40.512627 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 20:39:40.512634 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.512638 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:39:40.512642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 20:39:40.512649 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 20:39:40.512655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.512659 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:39:40.512663 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 20:39:40.512670 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 20:39:40.512674 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.512678 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:39:40.512682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 20:39:40.512688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.512692 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.512698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 20:39:40.512703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.512710 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.512714 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 20:39:40.512718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.512721 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.512725 | orchestrator | 2025-07-12 20:39:40.512729 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-07-12 20:39:40.512733 | orchestrator | Saturday 12 July 2025 20:34:39 +0000 (0:00:06.082) 0:04:52.068 ********* 2025-07-12 20:39:40.512737 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 20:39:40.512743 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 20:39:40.512751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.512758 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:39:40.512762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 20:39:40.512766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 20:39:40.512770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.512774 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:39:40.512781 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 20:39:40.512825 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 
'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 20:39:40.512834 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.512838 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:39:40.512842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 20:39:40.512846 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:39:40.512850 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:39:40.512854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 20:39:40.512861 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 20:39:40.512867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:39:40.512874 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.512878 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:39:40.512882 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.512886 | orchestrator |
2025-07-12 20:39:40.512889 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-07-12 20:39:40.512893 | orchestrator | Saturday 12 July 2025 20:34:44 +0000 (0:00:04.095) 0:04:56.164 *********
2025-07-12 20:39:40.512897 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:39:40.512901 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.512904 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.512908 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-07-12 20:39:40.512912 | orchestrator |
2025-07-12 20:39:40.512915 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-07-12 20:39:40.512919 | orchestrator | Saturday 12 July 2025 20:34:46 +0000 (0:00:02.468) 0:04:58.633 *********
2025-07-12 20:39:40.512923 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-07-12 20:39:40.512927 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-12 20:39:40.512930 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-07-12 20:39:40.512934 | orchestrator |
2025-07-12 20:39:40.512938 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-07-12 20:39:40.512941 | orchestrator | Saturday 12 July 2025 20:34:49 +0000 (0:00:02.607) 0:05:01.240 *********
2025-07-12 20:39:40.512945 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-12 20:39:40.512949 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-07-12 20:39:40.512952 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-07-12 20:39:40.512956 | orchestrator |
2025-07-12 20:39:40.512960 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-07-12 20:39:40.512963 | orchestrator | Saturday 12 July 2025 20:34:52 +0000 (0:00:03.255) 0:05:04.496 *********
2025-07-12 20:39:40.512967 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:39:40.512971 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:39:40.512975 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:39:40.512978 | orchestrator |
2025-07-12 20:39:40.512982 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-07-12 20:39:40.512986 | orchestrator | Saturday 12 July 2025 20:34:54 +0000 (0:00:01.648) 0:05:06.144 *********
2025-07-12 20:39:40.512989 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:39:40.512993 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:39:40.512997 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:39:40.513000 | orchestrator |
2025-07-12 20:39:40.513004 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-07-12 20:39:40.513008 | orchestrator | Saturday 12 July 2025 20:34:55 +0000 (0:00:01.688) 0:05:07.833 *********
2025-07-12 20:39:40.513012 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-07-12 20:39:40.513015 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-07-12 20:39:40.513019 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-07-12 20:39:40.513023 | orchestrator |
2025-07-12 20:39:40.513026 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-07-12 20:39:40.513062 | orchestrator | Saturday 12 July 2025 20:34:57 +0000 (0:00:02.077) 0:05:09.910 *********
2025-07-12 20:39:40.513070 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-07-12 20:39:40.513074 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-07-12 20:39:40.513078 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-07-12 20:39:40.513081 | orchestrator |
2025-07-12 20:39:40.513085 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-07-12 20:39:40.513089 | orchestrator | Saturday 12 July 2025 20:34:59 +0000 (0:00:01.733) 0:05:11.644 *********
2025-07-12 20:39:40.513095 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-07-12 20:39:40.513099 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-07-12 20:39:40.513102 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-07-12 20:39:40.513106 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-07-12 20:39:40.513110 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-07-12 20:39:40.513113 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-07-12 20:39:40.513117 | orchestrator |
2025-07-12 20:39:40.513121 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-07-12 20:39:40.513124 | orchestrator | Saturday 12 July 2025 20:35:06 +0000 (0:00:07.291) 0:05:18.935 *********
2025-07-12 20:39:40.513128 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:39:40.513132 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:39:40.513135 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:39:40.513139 | orchestrator |
2025-07-12 20:39:40.513143 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-07-12 20:39:40.513146 | orchestrator | Saturday 12 July 2025 20:35:07 +0000 (0:00:00.560) 0:05:19.496 *********
2025-07-12 20:39:40.513150 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:39:40.513154 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:39:40.513157 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:39:40.513161 | orchestrator |
2025-07-12 20:39:40.513165 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-07-12 20:39:40.513171 | orchestrator | Saturday 12 July 2025 20:35:08 +0000 (0:00:00.746) 0:05:20.242 *********
2025-07-12 20:39:40.513175 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:39:40.513179 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:39:40.513182 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:39:40.513186 | orchestrator |
2025-07-12 20:39:40.513190 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-07-12 20:39:40.513193 | orchestrator | Saturday 12 July 2025 20:35:10 +0000 (0:00:02.775) 0:05:23.018 *********
2025-07-12 20:39:40.513198 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-07-12 20:39:40.513202 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-07-12 20:39:40.513206 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-07-12 20:39:40.513210 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-07-12 20:39:40.513214 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-07-12 20:39:40.513218 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-07-12 20:39:40.513221 | orchestrator |
2025-07-12 20:39:40.513225 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-07-12 20:39:40.513229 | orchestrator | Saturday 12 July 2025 20:35:16 +0000 (0:00:05.888) 0:05:28.907 *********
2025-07-12 20:39:40.513232 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-12 20:39:40.513239 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-12 20:39:40.513243 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-12 20:39:40.513246 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-07-12 20:39:40.513250 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:39:40.513254 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-07-12 20:39:40.513258 | orchestrator | changed: [testbed-node-5]
2025-07-12 20:39:40.513261 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-07-12 20:39:40.513265 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:39:40.513269 | orchestrator |
2025-07-12 20:39:40.513272 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-07-12 20:39:40.513276 | orchestrator | Saturday 12 July 2025 20:35:21 +0000 (0:00:04.872) 0:05:33.780 *********
2025-07-12 20:39:40.513280 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:39:40.513283 | orchestrator |
2025-07-12 20:39:40.513287 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-07-12 20:39:40.513291 | orchestrator | Saturday 12 July 2025 20:35:22 +0000 (0:00:00.357) 0:05:34.138 *********
2025-07-12 20:39:40.513294 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:39:40.513298 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:39:40.513302 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:39:40.513305 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:39:40.513309 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.513313 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.513316 | orchestrator |
2025-07-12 20:39:40.513320 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-07-12 20:39:40.513324 | orchestrator | Saturday 12 July 2025 20:35:23 +0000 (0:00:01.504) 0:05:35.643 *********
2025-07-12 20:39:40.513327 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-07-12 20:39:40.513331 | orchestrator |
2025-07-12 20:39:40.513335 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-07-12 20:39:40.513338 | orchestrator | Saturday 12 July 2025 20:35:24 +0000 (0:00:01.017) 0:05:36.660 *********
2025-07-12 20:39:40.513342 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:39:40.513346 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:39:40.513349 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:39:40.513353 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:39:40.513357 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.513360 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.513364 | orchestrator |
2025-07-12 20:39:40.513368 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-07-12 20:39:40.513374 | orchestrator | Saturday 12 July 2025 20:35:26 +0000 (0:00:01.476) 0:05:38.138 *********
2025-07-12 20:39:40.513378 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 20:39:40.513385 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 20:39:40.513392 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 20:39:40.513396 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 20:39:40.513401 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 20:39:40.513407 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 20:39:40.513413 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 20:39:40.513420 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 20:39:40.513424 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 20:39:40.513428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:39:40.513432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 20:39:40.513438 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 20:39:40.513445 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 20:39:40.513453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:39:40.513457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:39:40.513461 | orchestrator |
2025-07-12 20:39:40.513464 | orchestrator | TASK [nova-cell : Copying over nova.conf] **************************************
2025-07-12 20:39:40.513468 | orchestrator | Saturday 12 July 2025 20:35:31 +0000 (0:00:05.916) 0:05:44.054 *********
2025-07-12 20:39:40.513472 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 20:39:40.513476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 20:39:40.513483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 20:39:40.513615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 20:39:40.513621 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-07-12 20:39:40.513625 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-07-12 20:39:40.513629 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 20:39:40.513637 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 20:39:40.513641 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 20:39:40.513650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-07-12 20:39:40.513655 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 20:39:40.513659 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-07-12 20:39:40.513663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:39:40.513669 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:39:40.513682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-07-12 20:39:40.513689 | orchestrator |
2025-07-12 20:39:40.513693 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2025-07-12 20:39:40.513697 | orchestrator | Saturday 12 July 2025 20:35:43 +0000 (0:00:12.002) 0:05:56.056 *********
2025-07-12 20:39:40.513701 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:39:40.513705 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:39:40.513710 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:39:40.513714 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:39:40.513718 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.513722 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.513725 | orchestrator |
2025-07-12 20:39:40.513729 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2025-07-12 20:39:40.513733 | orchestrator | Saturday 12 July 2025 20:35:48 +0000 (0:00:04.394) 0:06:00.451 *********
2025-07-12 20:39:40.513736 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-12 20:39:40.513740 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-12 20:39:40.513744 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-12 20:39:40.513747 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-12 20:39:40.513751 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-12 20:39:40.513755 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-07-12 20:39:40.513758 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-12 20:39:40.513762 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.513766 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-12 20:39:40.513769 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:39:40.513773 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-12 20:39:40.513777 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.513780 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-12 20:39:40.513784 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-12 20:39:40.513788 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-07-12 20:39:40.513792 | orchestrator |
2025-07-12 20:39:40.513795 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-07-12 20:39:40.513799 | orchestrator | Saturday 12 July 2025 20:35:54 +0000 (0:00:05.691) 0:06:06.142 *********
2025-07-12 20:39:40.513803 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:39:40.513806 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:39:40.513810 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:39:40.513814 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:39:40.513817 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.513821 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.513824 | orchestrator |
2025-07-12 20:39:40.513828 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-07-12 20:39:40.513832 | orchestrator | Saturday 12 July 2025 20:35:54 +0000 (0:00:00.859) 0:06:07.002 *********
2025-07-12 20:39:40.513835 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-12 20:39:40.513839 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-12 20:39:40.513846 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-12 20:39:40.513850 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-12 20:39:40.513854 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-12 20:39:40.513857 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-07-12 20:39:40.513861 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-07-12 20:39:40.513865 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-07-12 20:39:40.513871 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-07-12 20:39:40.513874 | orchestrator | changed: [testbed-node-3] =>
(item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-07-12 20:39:40.513878 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-07-12 20:39:40.513882 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.513885 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-07-12 20:39:40.513889 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-07-12 20:39:40.513893 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.513896 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-07-12 20:39:40.513900 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-07-12 20:39:40.513904 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.513909 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-07-12 20:39:40.513913 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-07-12 20:39:40.513917 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-07-12 20:39:40.513920 | orchestrator | 2025-07-12 20:39:40.513924 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-07-12 20:39:40.513928 | orchestrator | Saturday 12 July 2025 20:36:02 +0000 (0:00:07.752) 0:06:14.755 ********* 2025-07-12 20:39:40.513932 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 20:39:40.513935 | orchestrator | skipping: 
[testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 20:39:40.513939 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-12 20:39:40.513943 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-07-12 20:39:40.513946 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-12 20:39:40.513950 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-07-12 20:39:40.513953 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-07-12 20:39:40.513957 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-07-12 20:39:40.513961 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-07-12 20:39:40.513964 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 20:39:40.513971 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-12 20:39:40.513975 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 20:39:40.513978 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-07-12 20:39:40.513982 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-07-12 20:39:40.513986 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.513989 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-12 20:39:40.513993 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-07-12 20:39:40.513997 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 
'ssh_config'})  2025-07-12 20:39:40.514000 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.514004 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-07-12 20:39:40.514008 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.514079 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-12 20:39:40.514086 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-12 20:39:40.514090 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-07-12 20:39:40.514094 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-12 20:39:40.514098 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-12 20:39:40.514102 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-07-12 20:39:40.514106 | orchestrator | 2025-07-12 20:39:40.514110 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-07-12 20:39:40.514113 | orchestrator | Saturday 12 July 2025 20:36:10 +0000 (0:00:08.317) 0:06:23.072 ********* 2025-07-12 20:39:40.514117 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:39:40.514121 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:39:40.514124 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:39:40.514128 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.514135 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.514139 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.514143 | orchestrator | 2025-07-12 20:39:40.514146 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-07-12 20:39:40.514150 | orchestrator | Saturday 12 July 2025 20:36:11 
+0000 (0:00:00.688) 0:06:23.760 ********* 2025-07-12 20:39:40.514154 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:39:40.514158 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:39:40.514161 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:39:40.514165 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.514168 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.514172 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.514176 | orchestrator | 2025-07-12 20:39:40.514180 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-07-12 20:39:40.514183 | orchestrator | Saturday 12 July 2025 20:36:12 +0000 (0:00:00.947) 0:06:24.707 ********* 2025-07-12 20:39:40.514187 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.514191 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.514194 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:39:40.514198 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.514202 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:39:40.514205 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:39:40.514209 | orchestrator | 2025-07-12 20:39:40.514212 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-07-12 20:39:40.514219 | orchestrator | Saturday 12 July 2025 20:36:14 +0000 (0:00:02.215) 0:06:26.923 ********* 2025-07-12 20:39:40.514227 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 20:39:40.514231 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 20:39:40.514235 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 20:39:40.514242 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.514246 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 20:39:40.514250 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:39:40.514258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.514268 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:39:40.514273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-07-12 20:39:40.514277 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-07-12 
20:39:40.514282 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.514286 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:39:40.514293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 20:39:40.514299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.514308 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.514313 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 20:39:40.514317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.514321 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.514325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-07-12 20:39:40.514330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-07-12 20:39:40.514334 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.514338 | orchestrator | 2025-07-12 20:39:40.514342 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2025-07-12 20:39:40.514347 | orchestrator | Saturday 12 July 2025 20:36:16 +0000 (0:00:01.807) 0:06:28.730 ********* 2025-07-12 20:39:40.514351 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-07-12 20:39:40.514355 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-07-12 20:39:40.514359 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:39:40.514363 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-07-12 20:39:40.514368 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-07-12 20:39:40.514372 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:39:40.514378 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-07-12 20:39:40.514386 | orchestrator | skipping: [testbed-node-5] => 
(item=nova-compute-ironic)  2025-07-12 20:39:40.514390 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:39:40.514394 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-07-12 20:39:40.514398 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-07-12 20:39:40.514403 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.514407 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-07-12 20:39:40.514411 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-07-12 20:39:40.514415 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.514419 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-07-12 20:39:40.514423 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-07-12 20:39:40.514427 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.514430 | orchestrator | 2025-07-12 20:39:40.514434 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2025-07-12 20:39:40.514438 | orchestrator | Saturday 12 July 2025 20:36:17 +0000 (0:00:00.681) 0:06:29.412 ********* 2025-07-12 20:39:40.514444 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 20:39:40.514449 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 20:39:40.514453 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-07-12 20:39:40.514461 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 20:39:40.514468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 20:39:40.514475 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 20:39:40.514479 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': 
{'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 20:39:40.514483 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-07-12 20:39:40.514487 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-07-12 20:39:40.514491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.514500 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.514507 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.514511 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.514515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.514519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-07-12 20:39:40.514523 | orchestrator | 2025-07-12 20:39:40.514526 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-07-12 20:39:40.514530 | orchestrator | Saturday 12 July 2025 20:36:20 +0000 (0:00:02.902) 0:06:32.314 ********* 2025-07-12 20:39:40.514537 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:39:40.514540 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:39:40.514544 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:39:40.514548 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.514551 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.514555 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.514559 | orchestrator | 2025-07-12 20:39:40.514562 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 20:39:40.514566 | orchestrator | Saturday 12 July 2025 20:36:20 +0000 (0:00:00.646) 0:06:32.961 ********* 2025-07-12 20:39:40.514570 | orchestrator | 2025-07-12 20:39:40.514574 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 20:39:40.514578 | orchestrator | Saturday 12 July 2025 20:36:21 +0000 (0:00:00.346) 0:06:33.307 ********* 2025-07-12 20:39:40.514581 | orchestrator | 2025-07-12 20:39:40.514585 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 20:39:40.514589 | orchestrator | Saturday 12 July 2025 20:36:21 +0000 (0:00:00.145) 0:06:33.452 ********* 2025-07-12 20:39:40.514592 | orchestrator | 2025-07-12 20:39:40.514598 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 
20:39:40.514602 | orchestrator | Saturday 12 July 2025 20:36:21 +0000 (0:00:00.143) 0:06:33.596 ********* 2025-07-12 20:39:40.514606 | orchestrator | 2025-07-12 20:39:40.514610 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 20:39:40.514613 | orchestrator | Saturday 12 July 2025 20:36:21 +0000 (0:00:00.140) 0:06:33.737 ********* 2025-07-12 20:39:40.514617 | orchestrator | 2025-07-12 20:39:40.514621 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-07-12 20:39:40.514624 | orchestrator | Saturday 12 July 2025 20:36:21 +0000 (0:00:00.131) 0:06:33.868 ********* 2025-07-12 20:39:40.514628 | orchestrator | 2025-07-12 20:39:40.514632 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] ***************** 2025-07-12 20:39:40.514635 | orchestrator | Saturday 12 July 2025 20:36:21 +0000 (0:00:00.134) 0:06:34.003 ********* 2025-07-12 20:39:40.514639 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:39:40.514643 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:39:40.514647 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:39:40.514650 | orchestrator | 2025-07-12 20:39:40.514654 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] **************** 2025-07-12 20:39:40.514658 | orchestrator | Saturday 12 July 2025 20:36:29 +0000 (0:00:07.644) 0:06:41.648 ********* 2025-07-12 20:39:40.514661 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:39:40.514665 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:39:40.514669 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:39:40.514673 | orchestrator | 2025-07-12 20:39:40.514678 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] *********************** 2025-07-12 20:39:40.514682 | orchestrator | Saturday 12 July 2025 20:36:49 +0000 (0:00:19.623) 0:07:01.272 ********* 2025-07-12 20:39:40.514686 | orchestrator | 
changed: [testbed-node-5] 2025-07-12 20:39:40.514690 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:39:40.514693 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:39:40.514697 | orchestrator | 2025-07-12 20:39:40.514701 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] ******************* 2025-07-12 20:39:40.514704 | orchestrator | Saturday 12 July 2025 20:37:15 +0000 (0:00:26.817) 0:07:28.089 ********* 2025-07-12 20:39:40.514708 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:39:40.514712 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:39:40.514716 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:39:40.514719 | orchestrator | 2025-07-12 20:39:40.514723 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] ************** 2025-07-12 20:39:40.514727 | orchestrator | Saturday 12 July 2025 20:37:58 +0000 (0:00:42.363) 0:08:10.453 ********* 2025-07-12 20:39:40.514730 | orchestrator | FAILED - RETRYING: [testbed-node-4]: Checking libvirt container is ready (10 retries left). 
2025-07-12 20:39:40.514737 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:39:40.514741 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:39:40.514744 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:39:40.514748 | orchestrator | 2025-07-12 20:39:40.514752 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] ************************* 2025-07-12 20:39:40.514755 | orchestrator | Saturday 12 July 2025 20:38:04 +0000 (0:00:06.367) 0:08:16.821 ********* 2025-07-12 20:39:40.514759 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:39:40.514763 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:39:40.514766 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:39:40.514770 | orchestrator | 2025-07-12 20:39:40.514774 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2025-07-12 20:39:40.514777 | orchestrator | Saturday 12 July 2025 20:38:05 +0000 (0:00:00.846) 0:08:17.668 ********* 2025-07-12 20:39:40.514781 | orchestrator | changed: [testbed-node-3] 2025-07-12 20:39:40.514785 | orchestrator | changed: [testbed-node-5] 2025-07-12 20:39:40.514788 | orchestrator | changed: [testbed-node-4] 2025-07-12 20:39:40.514792 | orchestrator | 2025-07-12 20:39:40.514796 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2025-07-12 20:39:40.514800 | orchestrator | Saturday 12 July 2025 20:38:29 +0000 (0:00:23.856) 0:08:41.525 ********* 2025-07-12 20:39:40.514803 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:39:40.514807 | orchestrator | 2025-07-12 20:39:40.514811 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2025-07-12 20:39:40.514814 | orchestrator | Saturday 12 July 2025 20:38:29 +0000 (0:00:00.137) 0:08:41.662 ********* 2025-07-12 20:39:40.514818 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.514822 | orchestrator | skipping: [testbed-node-1] 
2025-07-12 20:39:40.514825 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:39:40.514829 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:39:40.514833 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.514837 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 2025-07-12 20:39:40.514840 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-07-12 20:39:40.514844 | orchestrator | 2025-07-12 20:39:40.514848 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2025-07-12 20:39:40.514852 | orchestrator | Saturday 12 July 2025 20:38:51 +0000 (0:00:22.356) 0:09:04.019 ********* 2025-07-12 20:39:40.514855 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:39:40.514859 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.514863 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.514866 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.514870 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:39:40.514874 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:39:40.514877 | orchestrator | 2025-07-12 20:39:40.514881 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2025-07-12 20:39:40.514885 | orchestrator | Saturday 12 July 2025 20:39:01 +0000 (0:00:10.081) 0:09:14.101 ********* 2025-07-12 20:39:40.514888 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:39:40.514892 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:39:40.514896 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.514899 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.514903 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.514907 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-5 2025-07-12 20:39:40.514910 | 
orchestrator | 2025-07-12 20:39:40.514916 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-07-12 20:39:40.514920 | orchestrator | Saturday 12 July 2025 20:39:07 +0000 (0:00:05.301) 0:09:19.403 ********* 2025-07-12 20:39:40.514924 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-07-12 20:39:40.514928 | orchestrator | 2025-07-12 20:39:40.514931 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-07-12 20:39:40.514938 | orchestrator | Saturday 12 July 2025 20:39:18 +0000 (0:00:11.419) 0:09:30.822 ********* 2025-07-12 20:39:40.514942 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-07-12 20:39:40.514946 | orchestrator | 2025-07-12 20:39:40.514949 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2025-07-12 20:39:40.514953 | orchestrator | Saturday 12 July 2025 20:39:20 +0000 (0:00:01.428) 0:09:32.251 ********* 2025-07-12 20:39:40.514957 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:39:40.514960 | orchestrator | 2025-07-12 20:39:40.514964 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2025-07-12 20:39:40.514968 | orchestrator | Saturday 12 July 2025 20:39:21 +0000 (0:00:01.397) 0:09:33.648 ********* 2025-07-12 20:39:40.514971 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2025-07-12 20:39:40.514975 | orchestrator | 2025-07-12 20:39:40.514979 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2025-07-12 20:39:40.514985 | orchestrator | Saturday 12 July 2025 20:39:31 +0000 (0:00:10.009) 0:09:43.658 ********* 2025-07-12 20:39:40.514989 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:39:40.514993 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:39:40.514996 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:39:40.515000 | 
orchestrator | ok: [testbed-node-0] 2025-07-12 20:39:40.515004 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:39:40.515008 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:39:40.515011 | orchestrator | 2025-07-12 20:39:40.515015 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2025-07-12 20:39:40.515019 | orchestrator | 2025-07-12 20:39:40.515022 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2025-07-12 20:39:40.515026 | orchestrator | Saturday 12 July 2025 20:39:33 +0000 (0:00:01.755) 0:09:45.413 ********* 2025-07-12 20:39:40.515046 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:39:40.515050 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:39:40.515054 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:39:40.515058 | orchestrator | 2025-07-12 20:39:40.515061 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2025-07-12 20:39:40.515065 | orchestrator | 2025-07-12 20:39:40.515069 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2025-07-12 20:39:40.515072 | orchestrator | Saturday 12 July 2025 20:39:34 +0000 (0:00:01.107) 0:09:46.521 ********* 2025-07-12 20:39:40.515076 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.515079 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.515083 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.515087 | orchestrator | 2025-07-12 20:39:40.515090 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2025-07-12 20:39:40.515094 | orchestrator | 2025-07-12 20:39:40.515098 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2025-07-12 20:39:40.515102 | orchestrator | Saturday 12 July 2025 20:39:34 +0000 (0:00:00.550) 0:09:47.071 ********* 2025-07-12 20:39:40.515105 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-conductor)  2025-07-12 20:39:40.515109 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2025-07-12 20:39:40.515112 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2025-07-12 20:39:40.515116 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2025-07-12 20:39:40.515120 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)  2025-07-12 20:39:40.515124 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2025-07-12 20:39:40.515127 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:39:40.515131 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2025-07-12 20:39:40.515135 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2025-07-12 20:39:40.515138 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2025-07-12 20:39:40.515142 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2025-07-12 20:39:40.515148 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2025-07-12 20:39:40.515152 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2025-07-12 20:39:40.515156 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:39:40.515160 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2025-07-12 20:39:40.515163 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2025-07-12 20:39:40.515167 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2025-07-12 20:39:40.515171 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2025-07-12 20:39:40.515174 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2025-07-12 20:39:40.515178 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2025-07-12 20:39:40.515182 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:39:40.515185 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-conductor)  2025-07-12 20:39:40.515189 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2025-07-12 20:39:40.515193 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2025-07-12 20:39:40.515196 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2025-07-12 20:39:40.515200 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)  2025-07-12 20:39:40.515204 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2025-07-12 20:39:40.515207 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.515211 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2025-07-12 20:39:40.515215 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2025-07-12 20:39:40.515218 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2025-07-12 20:39:40.515224 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2025-07-12 20:39:40.515228 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2025-07-12 20:39:40.515232 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2025-07-12 20:39:40.515236 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.515239 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2025-07-12 20:39:40.515243 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2025-07-12 20:39:40.515247 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2025-07-12 20:39:40.515250 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2025-07-12 20:39:40.515254 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2025-07-12 20:39:40.515257 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2025-07-12 20:39:40.515261 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.515265 | orchestrator | 
2025-07-12 20:39:40.515268 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2025-07-12 20:39:40.515272 | orchestrator | 2025-07-12 20:39:40.515276 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2025-07-12 20:39:40.515280 | orchestrator | Saturday 12 July 2025 20:39:36 +0000 (0:00:01.365) 0:09:48.437 ********* 2025-07-12 20:39:40.515283 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2025-07-12 20:39:40.515289 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2025-07-12 20:39:40.515293 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.515297 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2025-07-12 20:39:40.515300 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2025-07-12 20:39:40.515304 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:40.515308 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2025-07-12 20:39:40.515311 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2025-07-12 20:39:40.515315 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:40.515319 | orchestrator | 2025-07-12 20:39:40.515322 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2025-07-12 20:39:40.515329 | orchestrator | 2025-07-12 20:39:40.515333 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2025-07-12 20:39:40.515336 | orchestrator | Saturday 12 July 2025 20:39:37 +0000 (0:00:00.761) 0:09:49.198 ********* 2025-07-12 20:39:40.515340 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:40.515344 | orchestrator | 2025-07-12 20:39:40.515347 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2025-07-12 20:39:40.515351 | orchestrator | 2025-07-12 20:39:40.515355 | orchestrator | TASK [nova-cell : Run Nova 
cell online database migrations] ********************
2025-07-12 20:39:40.515358 | orchestrator | Saturday 12 July 2025 20:39:37 +0000 (0:00:00.701) 0:09:49.899 *********
2025-07-12 20:39:40.515362 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:39:40.515366 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:40.515369 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:40.515373 | orchestrator |
2025-07-12 20:39:40.515377 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:39:40.515380 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:39:40.515385 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-07-12 20:39:40.515388 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-07-12 20:39:40.515392 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-07-12 20:39:40.515396 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-07-12 20:39:40.515400 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-07-12 20:39:40.515403 | orchestrator | testbed-node-5 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-07-12 20:39:40.515407 | orchestrator |
2025-07-12 20:39:40.515411 | orchestrator |
2025-07-12 20:39:40.515415 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:39:40.515418 | orchestrator | Saturday 12 July 2025 20:39:38 +0000 (0:00:00.484) 0:09:50.384 *********
2025-07-12 20:39:40.515422 | orchestrator | ===============================================================================
2025-07-12 20:39:40.515426 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 42.36s
2025-07-12 20:39:40.515429 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 28.99s
2025-07-12 20:39:40.515433 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 26.82s
2025-07-12 20:39:40.515437 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 25.09s
2025-07-12 20:39:40.515440 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 23.86s
2025-07-12 20:39:40.515444 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.36s
2025-07-12 20:39:40.515448 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 19.62s
2025-07-12 20:39:40.515454 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 18.93s
2025-07-12 20:39:40.515458 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 16.56s
2025-07-12 20:39:40.515461 | orchestrator | nova : Restart nova-api container -------------------------------------- 14.52s
2025-07-12 20:39:40.515465 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.36s
2025-07-12 20:39:40.515469 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.19s
2025-07-12 20:39:40.515475 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.03s
2025-07-12 20:39:40.515479 | orchestrator | nova-cell : Copying over nova.conf ------------------------------------- 12.00s
2025-07-12 20:39:40.515482 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.42s
2025-07-12 20:39:40.515486 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 11.00s
2025-07-12 20:39:40.515490 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.08s
2025-07-12 20:39:40.515493 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.01s
2025-07-12 20:39:40.515497 | orchestrator | nova : Copying over nova.conf ------------------------------------------- 9.14s
2025-07-12 20:39:40.515501 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 8.32s
2025-07-12 20:39:40.515542 | orchestrator | 2025-07-12 20:39:40 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED
2025-07-12 20:39:40.517171 | orchestrator | 2025-07-12 20:39:40 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED
2025-07-12 20:39:40.517255 | orchestrator | 2025-07-12 20:39:40 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:39:43.560253 | orchestrator | 2025-07-12 20:39:43 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED
2025-07-12 20:39:43.562257 | orchestrator | 2025-07-12 20:39:43 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state STARTED
2025-07-12 20:39:43.562359 | orchestrator | 2025-07-12 20:39:43 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:39:46.608284 | orchestrator | 2025-07-12 20:39:46 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED
2025-07-12 20:39:46.612595 | orchestrator | 2025-07-12 20:39:46 | INFO  | Task 07ce8fa8-91dc-431e-8ef7-bf28ea6837d8 is in state SUCCESS
2025-07-12 20:39:46.615266 | orchestrator |
2025-07-12 20:39:46.615303 | orchestrator |
2025-07-12 20:39:46.615316 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:39:46.615328 | orchestrator |
2025-07-12 20:39:46.615339 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:39:46.615351 | orchestrator | Saturday 12 July 2025 20:37:25 +0000 (0:00:00.442) 0:00:00.442 ********* 2025-07-12
20:39:46.615362 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:39:46.615374 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:39:46.615419 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:39:46.615433 | orchestrator | 2025-07-12 20:39:46.615471 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-07-12 20:39:46.615484 | orchestrator | Saturday 12 July 2025 20:37:25 +0000 (0:00:00.421) 0:00:00.864 ********* 2025-07-12 20:39:46.615495 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-07-12 20:39:46.615542 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-07-12 20:39:46.615554 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-07-12 20:39:46.615589 | orchestrator | 2025-07-12 20:39:46.615602 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-07-12 20:39:46.615613 | orchestrator | 2025-07-12 20:39:46.615624 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-07-12 20:39:46.615635 | orchestrator | Saturday 12 July 2025 20:37:26 +0000 (0:00:00.538) 0:00:01.403 ********* 2025-07-12 20:39:46.615647 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:39:46.615658 | orchestrator | 2025-07-12 20:39:46.615669 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-07-12 20:39:46.615680 | orchestrator | Saturday 12 July 2025 20:37:26 +0000 (0:00:00.610) 0:00:02.014 ********* 2025-07-12 20:39:46.615694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:39:46.615747 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:39:46.615761 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:39:46.615773 | orchestrator | 2025-07-12 20:39:46.615784 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-07-12 
20:39:46.615795 | orchestrator | Saturday 12 July 2025 20:37:27 +0000 (0:00:00.998) 0:00:03.013 ********* 2025-07-12 20:39:46.615805 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-07-12 20:39:46.615817 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-07-12 20:39:46.615828 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 20:39:46.615838 | orchestrator | 2025-07-12 20:39:46.615849 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-07-12 20:39:46.615861 | orchestrator | Saturday 12 July 2025 20:37:28 +0000 (0:00:00.946) 0:00:03.959 ********* 2025-07-12 20:39:46.615873 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:39:46.615885 | orchestrator | 2025-07-12 20:39:46.615898 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-07-12 20:39:46.615910 | orchestrator | Saturday 12 July 2025 20:37:29 +0000 (0:00:00.802) 0:00:04.762 ********* 2025-07-12 20:39:46.615938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:39:46.615953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:39:46.615980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:39:46.615992 | orchestrator | 2025-07-12 20:39:46.616005 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-07-12 20:39:46.616017 | orchestrator | Saturday 12 July 2025 20:37:31 +0000 (0:00:01.481) 0:00:06.243 ********* 2025-07-12 20:39:46.616056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 20:39:46.616071 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:46.616084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 20:39:46.616097 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:46.616259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 20:39:46.616327 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:46.616341 | 
orchestrator | 2025-07-12 20:39:46.616352 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-07-12 20:39:46.616362 | orchestrator | Saturday 12 July 2025 20:37:31 +0000 (0:00:00.398) 0:00:06.641 ********* 2025-07-12 20:39:46.616373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 20:39:46.616408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 20:39:46.616421 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:46.616431 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:46.616449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-07-12 20:39:46.616461 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:46.616473 | orchestrator | 2025-07-12 20:39:46.616484 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-07-12 20:39:46.616496 | orchestrator | Saturday 12 July 2025 20:37:32 +0000 (0:00:00.869) 0:00:07.511 ********* 2025-07-12 20:39:46.616507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:39:46.616528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:39:46.616540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:39:46.616559 | orchestrator | 2025-07-12 20:39:46.616570 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-07-12 20:39:46.616582 | orchestrator | Saturday 12 July 2025 20:37:33 +0000 (0:00:01.261) 0:00:08.773 ********* 2025-07-12 20:39:46.616593 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:39:46.616610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:39:46.616622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-07-12 20:39:46.616634 | orchestrator | 2025-07-12 20:39:46.616645 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-07-12 20:39:46.616657 | orchestrator | Saturday 12 July 2025 20:37:35 +0000 (0:00:01.447) 0:00:10.220 ********* 2025-07-12 20:39:46.616667 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:46.616679 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:46.616690 | 
orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:46.616701 | orchestrator | 2025-07-12 20:39:46.616712 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-07-12 20:39:46.616724 | orchestrator | Saturday 12 July 2025 20:37:35 +0000 (0:00:00.576) 0:00:10.797 ********* 2025-07-12 20:39:46.616736 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-07-12 20:39:46.616747 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-07-12 20:39:46.616759 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-07-12 20:39:46.616776 | orchestrator | 2025-07-12 20:39:46.616788 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-07-12 20:39:46.616799 | orchestrator | Saturday 12 July 2025 20:37:36 +0000 (0:00:01.274) 0:00:12.072 ********* 2025-07-12 20:39:46.616811 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-07-12 20:39:46.616831 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-07-12 20:39:46.616842 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-07-12 20:39:46.616854 | orchestrator | 2025-07-12 20:39:46.616865 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-07-12 20:39:46.616877 | orchestrator | Saturday 12 July 2025 20:37:38 +0000 (0:00:01.236) 0:00:13.309 ********* 2025-07-12 20:39:46.616888 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-07-12 20:39:46.616923 | orchestrator | 2025-07-12 20:39:46.616959 | orchestrator | TASK [grafana : Find templated grafana dashboards] 
***************************** 2025-07-12 20:39:46.616971 | orchestrator | Saturday 12 July 2025 20:37:38 +0000 (0:00:00.838) 0:00:14.148 ********* 2025-07-12 20:39:46.617021 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-07-12 20:39:46.617058 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-07-12 20:39:46.617078 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:39:46.617097 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:39:46.617119 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:39:46.617146 | orchestrator | 2025-07-12 20:39:46.617164 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-07-12 20:39:46.617183 | orchestrator | Saturday 12 July 2025 20:37:39 +0000 (0:00:00.668) 0:00:14.817 ********* 2025-07-12 20:39:46.617201 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:39:46.617221 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:39:46.617238 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:39:46.617257 | orchestrator | 2025-07-12 20:39:46.617273 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-07-12 20:39:46.617284 | orchestrator | Saturday 12 July 2025 20:37:40 +0000 (0:00:00.552) 0:00:15.369 ********* 2025-07-12 20:39:46.617297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 569367, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9861312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2025-07-12 20:39:46.617318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 569367, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9861312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.617330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 569367, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9861312, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.617351 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 569118, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9511304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.617374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 569118, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9511304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.617424 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 569118, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9511304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.617436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 569111, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9481304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.617452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 569111, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9481304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.617463 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 569111, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9481304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.617482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 569078, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9431303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.617500 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 569078, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9431303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.617512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 569078, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9431303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.617523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 569024, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9391303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.617535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 569024, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9391303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.617551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 569024, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9391303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.617575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 569089, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9451303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.617587 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 569089, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9451303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.617607 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 569089, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9451303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.617618 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 569053, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9421303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.617630 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 569053, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9421303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.617645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 569053, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9421303, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.617664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 569064, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9421303, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46 | orchestrator | [loop output condensed: identical stat dicts repeated per node elided; every item is a regular file under /operations/grafana/dashboards/, mode 0644, owner root:root (uid 0, gid 0), nlink 1, dev 75]
2025-07-12 20:39:46 | orchestrator | changed: [testbed-node-1, testbed-node-2] => (item=ceph/hosts-overview.json, 27218 bytes)
2025-07-12 20:39:46 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/ceph_pools.json, 25279 bytes)
2025-07-12 20:39:46 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/osds-overview.json, 38432 bytes)
2025-07-12 20:39:46 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/pool-overview.json, 49139 bytes)
2025-07-12 20:39:46 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/ceph_overview.json, 80386 bytes)
2025-07-12 20:39:46 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/README.md, 84 bytes)
2025-07-12 20:39:46 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/rbd-overview.json, 25686 bytes)
2025-07-12 20:39:46 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/radosgw-overview.json, 39556 bytes)
2025-07-12 20:39:46 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/cephfs-overview.json, 9025 bytes)
2025-07-12 20:39:46 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/radosgw-sync-overview.json, 16156 bytes)
2025-07-12 20:39:46 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/ceph-cluster-advanced.json, 117836 bytes)
2025-07-12 20:39:46 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/multi-cluster-overview.json, 62676 bytes)
2025-07-12 20:39:46 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=ceph/radosgw-detail.json, 19695 bytes)
2025-07-12 20:39:46 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=infrastructure/libvirt.json, 29672 bytes)
2025-07-12 20:39:46 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=infrastructure/node_exporter_side_by_side.json, 70691 bytes)
2025-07-12 20:39:46 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=infrastructure/haproxy.json, 410814 bytes)
2025-07-12 20:39:46 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=infrastructure/node-cluster-rsrc-use.json, 16098 bytes)
2025-07-12 20:39:46 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=infrastructure/opensearch.json, 65458 bytes)
2025-07-12 20:39:46 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=infrastructure/prometheus_alertmanager.json, 115472 bytes)
2025-07-12 20:39:46 | orchestrator | changed: [testbed-node-0, testbed-node-1, testbed-node-2] => (item=infrastructure/prometheus.json, 21898 bytes)
2025-07-12 20:39:46 | orchestrator | changed: [testbed-node-2, testbed-node-0] => (item=infrastructure/nodes.json, 21109 bytes)
2025-07-12 20:39:46.619362 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 569291, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9781308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619374 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 569236, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9681308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 569236, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9681308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619402 
| orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 569236, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9681308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 569149, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9541304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 569149, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9541304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 569149, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9541304, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 569330, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.984131, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 569330, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.984131, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 569330, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.984131, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 569172, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9611306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 569172, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9611306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619528 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 569172, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9611306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 569237, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.976131, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 569237, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 
1752349798.976131, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 569237, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.976131, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 569153, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9551306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619604 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 569153, 'dev': 75, 
'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9551306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619615 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 569153, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9551306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619631 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 569142, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9531305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 31128, 'inode': 569142, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9531305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 569142, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9531305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 569160, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9591305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 569160, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9591305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619701 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 569160, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9591305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 569224, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9671307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 569224, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9671307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 569224, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9671307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619751 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 569356, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.985131, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 569356, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.985131, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 569356, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.985131, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 569305, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.979131, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619952 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 569305, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.979131, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 569305, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.979131, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.619975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 569135, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9521306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.620004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 569135, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9521306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.620016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 569135, 'dev': 75, 'nlink': 1, 'atime': 1752315970.0, 'mtime': 1752315970.0, 'ctime': 1752349798.9521306, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-07-12 20:39:46.620027 | orchestrator | 2025-07-12 20:39:46.620064 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-07-12 20:39:46.620076 | orchestrator | Saturday 12 July 2025 20:38:17 +0000 (0:00:37.247) 0:00:52.617 ********* 2025-07-12 20:39:46.620087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 20:39:46.620104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 20:39:46.620116 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:12.0.2.20250711', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-07-12 20:39:46.620134 | orchestrator |
2025-07-12 20:39:46.620145 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-07-12 20:39:46.620156 | orchestrator | Saturday 12 July 2025 20:38:18 +0000 (0:00:00.921) 0:00:53.538 *********
2025-07-12 20:39:46.620167 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:39:46.620178 | orchestrator |
2025-07-12 20:39:46.620189 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-07-12 20:39:46.620200 | orchestrator | Saturday 12 July 2025 20:38:20 +0000 (0:00:02.221) 0:00:55.760 *********
2025-07-12 20:39:46.620211 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:39:46.620222 | orchestrator |
2025-07-12 20:39:46.620232 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-07-12 20:39:46.620249 | orchestrator | Saturday 12 July 2025 20:38:22 +0000 (0:00:00.276) 0:00:57.776 *********
2025-07-12 20:39:46.620260 | orchestrator |
2025-07-12 20:39:46.620271 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-07-12 20:39:46.620281 | orchestrator | Saturday 12 July 2025 20:38:22 +0000 (0:00:00.068) 0:00:58.053 *********
2025-07-12 20:39:46.620292 | orchestrator |
2025-07-12 20:39:46.620303 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-07-12 20:39:46.620313 | orchestrator | Saturday 12 July 2025 20:38:22 +0000 (0:00:00.068) 0:00:58.121 *********
2025-07-12 20:39:46.620324 | orchestrator |
2025-07-12 20:39:46.620335 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-07-12 20:39:46.620345 | orchestrator | Saturday 12 July 2025 20:38:22 +0000 (0:00:00.068) 0:00:58.190 *********
2025-07-12 20:39:46.620356 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:46.620367 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:46.620377 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:39:46.620388 | orchestrator |
2025-07-12 20:39:46.620399 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-07-12 20:39:46.620410 | orchestrator | Saturday 12 July 2025 20:38:24 +0000 (0:00:01.968) 0:01:00.158 *********
2025-07-12 20:39:46.620420 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:46.620431 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:46.620442 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-07-12 20:39:46.620453 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-07-12 20:39:46.620464 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left).
2025-07-12 20:39:46.620474 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:39:46.620485 | orchestrator |
2025-07-12 20:39:46.620496 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-07-12 20:39:46.620507 | orchestrator | Saturday 12 July 2025 20:39:03 +0000 (0:00:38.205) 0:01:38.364 *********
2025-07-12 20:39:46.620518 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:39:46.620529 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:39:46.620539 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:39:46.620550 | orchestrator |
2025-07-12 20:39:46.620560 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-07-12 20:39:46.620571 | orchestrator | Saturday 12 July 2025 20:39:38 +0000 (0:00:34.895) 0:02:13.260 *********
2025-07-12 20:39:46.620582 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:39:46.620593 | orchestrator |
2025-07-12 20:39:46.620603 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-07-12 20:39:46.620614 | orchestrator | Saturday 12 July 2025 20:39:40 +0000 (0:00:02.249) 0:02:15.509 *********
2025-07-12 20:39:46.620625 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:39:46.620635 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:39:46.620646 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:39:46.620665 | orchestrator |
2025-07-12 20:39:46.620676 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-07-12 20:39:46.620687 | orchestrator | Saturday 12 July 2025 20:39:40 +0000 (0:00:00.322) 0:02:15.832 *********
2025-07-12 20:39:46.620703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2025-07-12 20:39:46.620717 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-07-12 20:39:46.620729 | orchestrator |
2025-07-12 20:39:46.620740 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-07-12 20:39:46.620750 | orchestrator | Saturday 12 July 2025 20:39:42 +0000 (0:00:02.316) 0:02:18.148 *********
2025-07-12 20:39:46.620761 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:39:46.620772 | orchestrator |
2025-07-12 20:39:46.620782 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:39:46.620795 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-12 20:39:46.620807 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-12 20:39:46.620818 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-12 20:39:46.620829 | orchestrator |
2025-07-12 20:39:46.620840 | orchestrator |
2025-07-12 20:39:46.620850 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:39:46.620861 | orchestrator | Saturday 12 July 2025 20:39:43 +0000 (0:00:00.305) 0:02:18.454 *********
2025-07-12 20:39:46.620872 | orchestrator | ===============================================================================
2025-07-12 20:39:46.620883 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.21s
2025-07-12 20:39:46.620893 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 37.25s
2025-07-12 20:39:46.620904 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 34.90s
2025-07-12 20:39:46.620920 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.32s
2025-07-12 20:39:46.620931 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.25s
2025-07-12 20:39:46.620942 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.22s
2025-07-12 20:39:46.620953 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.02s
2025-07-12 20:39:46.620963 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.97s
2025-07-12 20:39:46.620974 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.48s
2025-07-12 20:39:46.620985 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.45s
2025-07-12 20:39:46.620995 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.27s
2025-07-12 20:39:46.621006 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.26s
2025-07-12 20:39:46.621017 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.24s
2025-07-12 20:39:46.621027 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 1.00s
2025-07-12 20:39:46.621119 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.95s
2025-07-12 20:39:46.621138 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.92s
2025-07-12 20:39:46.621162 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.87s
2025-07-12 20:39:46.621173 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.84s
2025-07-12 20:39:46.621184 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.80s
2025-07-12 20:39:46.621195 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.67s
2025-07-12 20:39:46.621205 | orchestrator | 2025-07-12 20:39:46 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:39:49.664743 | orchestrator | 2025-07-12 20:39:49 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED
2025-07-12 20:39:49.664862 | orchestrator | 2025-07-12 20:39:49 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:39:52.711474 | orchestrator | 2025-07-12 20:39:52 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED
2025-07-12 20:39:52.711591 | orchestrator | 2025-07-12 20:39:52 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:39:55.758318 | orchestrator | 2025-07-12 20:39:55 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED
2025-07-12 20:39:55.758421 | orchestrator | 2025-07-12
2025-07-12 20:42:24.946492
| orchestrator | 2025-07-12 20:42:24 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED
2025-07-12 20:42:24.946603 | orchestrator | 2025-07-12 20:42:24 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:42:27.995527 | orchestrator | 2025-07-12 20:42:27 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED
2025-07-12 20:42:27.995630 | orchestrator | 2025-07-12 20:42:27 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:42:31.041222 | orchestrator | 2025-07-12 20:42:31 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED
2025-07-12 20:42:31.041357 | orchestrator | 2025-07-12 20:42:31 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:42:34.093665 | orchestrator | 2025-07-12 20:42:34 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED
2025-07-12 20:42:34.093801 | orchestrator | 2025-07-12 20:42:34 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:42:37.139084 | orchestrator | 2025-07-12 20:42:37 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state STARTED
2025-07-12 20:42:37.139255 | orchestrator | 2025-07-12 20:42:37 | INFO  | Wait 1 second(s) until the next check
2025-07-12 20:42:40.183759 | orchestrator | 2025-07-12 20:42:40 | INFO  | Task 2172a244-f1bf-4a6d-b71b-a28e3a21906e is in state SUCCESS
2025-07-12 20:42:40.185314 | orchestrator |
2025-07-12 20:42:40.185375 | orchestrator |
2025-07-12 20:42:40.185415 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:42:40.185438 | orchestrator |
2025-07-12 20:42:40.185459 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:42:40.185511 | orchestrator | Saturday 12 July 2025 20:38:00 +0000 (0:00:00.329) 0:00:00.329 *********
2025-07-12 20:42:40.185529 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:42:40.185549 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:42:40.185568 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:42:40.185808 | orchestrator |
2025-07-12 20:42:40.185835 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:42:40.185968 | orchestrator | Saturday 12 July 2025 20:38:00 +0000 (0:00:00.374) 0:00:00.704 *********
2025-07-12 20:42:40.187219 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2025-07-12 20:42:40.187236 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2025-07-12 20:42:40.187247 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-07-12 20:42:40.187258 | orchestrator |
2025-07-12 20:42:40.187269 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-07-12 20:42:40.187280 | orchestrator |
2025-07-12 20:42:40.187291 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-07-12 20:42:40.187328 | orchestrator | Saturday 12 July 2025 20:38:00 +0000 (0:00:00.419) 0:00:01.124 *********
2025-07-12 20:42:40.187348 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:42:40.187370 | orchestrator |
2025-07-12 20:42:40.187388 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-07-12 20:42:40.187406 | orchestrator | Saturday 12 July 2025 20:38:01 +0000 (0:00:00.588) 0:00:01.713 *********
2025-07-12 20:42:40.187426 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-07-12 20:42:40.187444 | orchestrator |
2025-07-12 20:42:40.187463 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-07-12 20:42:40.187482 | orchestrator | Saturday 12 July 2025 20:38:04 +0000 (0:00:03.294) 0:00:05.007 *********
2025-07-12 20:42:40.187500 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-07-12 20:42:40.187511 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-07-12 20:42:40.187522 | orchestrator |
2025-07-12 20:42:40.187533 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-07-12 20:42:40.187557 | orchestrator | Saturday 12 July 2025 20:38:11 +0000 (0:00:06.181) 0:00:11.189 *********
2025-07-12 20:42:40.187569 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-07-12 20:42:40.187580 | orchestrator |
2025-07-12 20:42:40.187590 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-07-12 20:42:40.187601 | orchestrator | Saturday 12 July 2025 20:38:14 +0000 (0:00:03.187) 0:00:14.376 *********
2025-07-12 20:42:40.187611 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-07-12 20:42:40.187622 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-07-12 20:42:40.187656 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-07-12 20:42:40.187668 | orchestrator |
2025-07-12 20:42:40.187679 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-07-12 20:42:40.187690 | orchestrator | Saturday 12 July 2025 20:38:22 +0000 (0:00:08.147) 0:00:22.523 *********
2025-07-12 20:42:40.187700 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-07-12 20:42:40.187711 | orchestrator |
2025-07-12 20:42:40.187722 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-07-12 20:42:40.187732 | orchestrator | Saturday 12 July 2025 20:38:25 +0000 (0:00:03.333) 0:00:25.856 *********
2025-07-12 20:42:40.187789 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-07-12 20:42:40.187804 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-07-12 20:42:40.187816 | orchestrator |
2025-07-12 20:42:40.187828 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-07-12 20:42:40.187840 | orchestrator | Saturday 12 July 2025 20:38:33 +0000 (0:00:07.408) 0:00:33.265 *********
2025-07-12 20:42:40.187870 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-07-12 20:42:40.187884 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-07-12 20:42:40.187896 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-07-12 20:42:40.187935 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-07-12 20:42:40.187949 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-07-12 20:42:40.187960 | orchestrator |
2025-07-12 20:42:40.187970 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-07-12 20:42:40.187981 | orchestrator | Saturday 12 July 2025 20:38:48 +0000 (0:00:15.638) 0:00:48.904 *********
2025-07-12 20:42:40.187991 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:42:40.188002 | orchestrator |
2025-07-12 20:42:40.188013 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-07-12 20:42:40.188023 | orchestrator | Saturday 12 July 2025 20:38:49 +0000 (0:00:00.594) 0:00:49.499 *********
2025-07-12 20:42:40.188034 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.188045 | orchestrator |
2025-07-12 20:42:40.188055 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2025-07-12 20:42:40.188066 | orchestrator | Saturday 12 July 2025 20:38:54 +0000 (0:00:05.392) 0:00:54.892 *********
2025-07-12 20:42:40.188076 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.188087 | orchestrator |
2025-07-12 20:42:40.188162 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-07-12 20:42:40.188237 | orchestrator | Saturday 12 July 2025 20:38:58 +0000 (0:00:03.660) 0:00:58.552 *********
2025-07-12 20:42:40.188250 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:42:40.188261 | orchestrator |
2025-07-12 20:42:40.188272 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2025-07-12 20:42:40.188282 | orchestrator | Saturday 12 July 2025 20:39:01 +0000 (0:00:03.204) 0:01:01.756 *********
2025-07-12 20:42:40.188293 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2025-07-12 20:42:40.188303 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2025-07-12 20:42:40.188314 | orchestrator |
2025-07-12 20:42:40.188383 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2025-07-12 20:42:40.188394 | orchestrator | Saturday 12 July 2025 20:39:10 +0000 (0:00:09.348) 0:01:11.104 *********
2025-07-12 20:42:40.188405 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2025-07-12 20:42:40.188416 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2025-07-12 20:42:40.188437 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2025-07-12 20:42:40.188449 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2025-07-12 20:42:40.188461 | orchestrator |
2025-07-12 20:42:40.188471 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2025-07-12 20:42:40.188482 | orchestrator | Saturday 12 July 2025 20:39:25 +0000 (0:00:14.992) 0:01:26.097 *********
2025-07-12 20:42:40.188493 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.188503 | orchestrator |
2025-07-12 20:42:40.188514 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2025-07-12 20:42:40.188525 | orchestrator | Saturday 12 July 2025 20:39:30 +0000 (0:00:04.444) 0:01:30.542 *********
2025-07-12 20:42:40.188535 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.188546 | orchestrator |
2025-07-12 20:42:40.188556 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2025-07-12 20:42:40.188567 | orchestrator | Saturday 12 July 2025 20:39:35 +0000 (0:00:04.817) 0:01:35.359 *********
2025-07-12 20:42:40.188588 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:42:40.188598 | orchestrator |
2025-07-12 20:42:40.188609 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2025-07-12 20:42:40.188620 | orchestrator | Saturday 12 July 2025 20:39:35 +0000 (0:00:00.243) 0:01:35.603 *********
2025-07-12 20:42:40.188630 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.188641 | orchestrator |
2025-07-12 20:42:40.188652 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-07-12 20:42:40.188662 | orchestrator | Saturday 12 July 2025 20:39:40 +0000 (0:00:05.014) 0:01:40.617 *********
2025-07-12 20:42:40.188672 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:42:40.188682 | orchestrator |
2025-07-12 20:42:40.188691 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2025-07-12 20:42:40.188701 | orchestrator | Saturday 12 July 2025 20:39:41 +0000 (0:00:01.227) 0:01:41.845 *********
2025-07-12 20:42:40.188710 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:42:40.188720 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:42:40.188729 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.188739 | orchestrator |
2025-07-12 20:42:40.188748 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2025-07-12 20:42:40.188757 | orchestrator | Saturday 12 July 2025 20:39:47 +0000 (0:00:05.437) 0:01:47.283 *********
2025-07-12 20:42:40.188767 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.188776 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:42:40.188786 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:42:40.188795 | orchestrator |
2025-07-12 20:42:40.188805 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2025-07-12 20:42:40.188814 | orchestrator | Saturday 12 July 2025 20:39:51 +0000 (0:00:04.653) 0:01:51.937 *********
2025-07-12 20:42:40.188823 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.188843 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:42:40.188853 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:42:40.188863 | orchestrator |
2025-07-12 20:42:40.188872 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2025-07-12 20:42:40.188882 | orchestrator | Saturday 12 July 2025 20:39:52 +0000 (0:00:00.796) 0:01:52.734 *********
2025-07-12 20:42:40.188891 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:42:40.188901 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:42:40.188910 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:42:40.188920 | orchestrator |
2025-07-12 20:42:40.188929 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2025-07-12 20:42:40.188939 | orchestrator | Saturday 12 July 2025 20:39:54 +0000 (0:00:01.918) 0:01:54.652 *********
2025-07-12 20:42:40.188948 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:42:40.188958 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.188967 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:42:40.188977 | orchestrator |
2025-07-12 20:42:40.188986 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2025-07-12 20:42:40.188996 | orchestrator | Saturday 12 July 2025 20:39:55 +0000 (0:00:01.323) 0:01:55.975 *********
2025-07-12 20:42:40.189007 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.189023 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:42:40.189080 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:42:40.189118 | orchestrator |
2025-07-12 20:42:40.189130 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2025-07-12 20:42:40.189146 | orchestrator | Saturday 12 July 2025 20:39:56 +0000 (0:00:01.162) 0:01:57.138 *********
2025-07-12 20:42:40.189162 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.189178 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:42:40.189194 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:42:40.189210 | orchestrator |
2025-07-12 20:42:40.189267 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2025-07-12 20:42:40.189279 | orchestrator | Saturday 12 July 2025 20:39:58 +0000 (0:00:01.938) 0:01:59.076 *********
2025-07-12 20:42:40.189298 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.189308 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:42:40.189317 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:42:40.189327 | orchestrator |
2025-07-12 20:42:40.189336 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2025-07-12 20:42:40.189346 | orchestrator | Saturday 12 July 2025 20:40:00 +0000 (0:00:01.733) 0:02:00.810 *********
2025-07-12 20:42:40.189355 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:42:40.189365 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:42:40.189374 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:42:40.189383 | orchestrator |
2025-07-12 20:42:40.189393 | orchestrator | TASK [octavia : Gather facts] **************************************************
2025-07-12 20:42:40.189402 | orchestrator | Saturday 12 July 2025 20:40:01 +0000 (0:00:00.639) 0:02:01.450 *********
2025-07-12 20:42:40.189412 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:42:40.189421 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:42:40.189430 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:42:40.189439 | orchestrator |
2025-07-12 20:42:40.189449 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-07-12 20:42:40.189465 | orchestrator | Saturday 12 July 2025 20:40:03 +0000 (0:00:02.663) 0:02:04.113 *********
2025-07-12 20:42:40.189474 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:42:40.189484 | orchestrator |
2025-07-12 20:42:40.189494 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2025-07-12 20:42:40.189503 | orchestrator | Saturday 12 July 2025 20:40:04 +0000 (0:00:00.734) 0:02:04.847 *********
2025-07-12 20:42:40.189524 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:42:40.189534 | orchestrator |
2025-07-12 20:42:40.189544 | orchestrator | TASK [octavia : Get service project id] ****************************************
2025-07-12 20:42:40.189553 | orchestrator | Saturday 12 July 2025 20:40:08 +0000 (0:00:03.755) 0:02:08.603 *********
2025-07-12 20:42:40.189563 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:42:40.189572 | orchestrator |
2025-07-12 20:42:40.189582 | orchestrator | TASK [octavia : Get security groups for octavia] *******************************
2025-07-12 20:42:40.189592 | orchestrator | Saturday 12 July 2025 20:40:11 +0000
(0:00:03.083) 0:02:11.687 ********* 2025-07-12 20:42:40.189602 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-07-12 20:42:40.189611 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-07-12 20:42:40.189621 | orchestrator | 2025-07-12 20:42:40.189631 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-07-12 20:42:40.189640 | orchestrator | Saturday 12 July 2025 20:40:17 +0000 (0:00:06.352) 0:02:18.039 ********* 2025-07-12 20:42:40.189650 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:42:40.189659 | orchestrator | 2025-07-12 20:42:40.189669 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-07-12 20:42:40.189679 | orchestrator | Saturday 12 July 2025 20:40:20 +0000 (0:00:03.134) 0:02:21.174 ********* 2025-07-12 20:42:40.189688 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:42:40.189698 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:42:40.189707 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:42:40.189750 | orchestrator | 2025-07-12 20:42:40.189760 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-07-12 20:42:40.189770 | orchestrator | Saturday 12 July 2025 20:40:21 +0000 (0:00:00.373) 0:02:21.547 ********* 2025-07-12 20:42:40.189789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 
'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:42:40.189863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:42:40.189884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:42:40.189908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 
'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:42:40.189926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:42:40.189944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.189968 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.189978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:42:40.190052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.190073 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.190084 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.190223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.190244 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:42:40.190273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:42:40.190344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:42:40.190363 | orchestrator | 2025-07-12 20:42:40.190374 | orchestrator | TASK [octavia : Check if policies shall be overwritten] 
************************ 2025-07-12 20:42:40.190384 | orchestrator | Saturday 12 July 2025 20:40:24 +0000 (0:00:02.645) 0:02:24.193 ********* 2025-07-12 20:42:40.190393 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:42:40.190403 | orchestrator | 2025-07-12 20:42:40.190412 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2025-07-12 20:42:40.190422 | orchestrator | Saturday 12 July 2025 20:40:24 +0000 (0:00:00.335) 0:02:24.529 ********* 2025-07-12 20:42:40.190431 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:42:40.190448 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:42:40.190465 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:42:40.190482 | orchestrator | 2025-07-12 20:42:40.190498 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2025-07-12 20:42:40.190514 | orchestrator | Saturday 12 July 2025 20:40:24 +0000 (0:00:00.291) 0:02:24.821 ********* 2025-07-12 20:42:40.190540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 
20:42:40.190609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:42:40.190630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:42:40.190641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:42:40.190651 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': 
{'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:42:40.190661 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:42:40.190721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 20:42:40.190738 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:42:40.190751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:42:40.190767 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:42:40.190775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:42:40.190783 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:42:40.190817 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 20:42:40.190827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:42:40.190840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:42:40.190848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:42:40.190867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:42:40.190882 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:42:40.190896 | orchestrator | 2025-07-12 20:42:40.190911 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-12 
20:42:40.190925 | orchestrator | Saturday 12 July 2025 20:40:25 +0000 (0:00:00.777) 0:02:25.598 ********* 2025-07-12 20:42:40.190937 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-07-12 20:42:40.190945 | orchestrator | 2025-07-12 20:42:40.190953 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2025-07-12 20:42:40.190961 | orchestrator | Saturday 12 July 2025 20:40:26 +0000 (0:00:00.774) 0:02:26.373 ********* 2025-07-12 20:42:40.190969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:42:40.191007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:42:40.191026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:42:40.191049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:42:40.191064 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:42:40.191077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:42:40.191086 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.191124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.191138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.191165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.191179 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.191194 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.191208 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:42:40.191231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 
'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:42:40.191240 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:42:40.191248 | orchestrator |
2025-07-12 20:42:40.191256 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2025-07-12 20:42:40.191276 | orchestrator | Saturday 12 July 2025 20:40:31 +0000 (0:00:05.018) 0:02:31.392 *********
2025-07-12 20:42:40.191285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test':
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 20:42:40.191293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:42:40.191301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:42:40.191309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:42:40.191324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:42:40.191333 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:42:40.191352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 
'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 20:42:40.191375 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:42:40.191389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:42:40.191398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:42:40.191411 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:42:40.191424 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:42:40.191447 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 20:42:40.191474 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:42:40.191487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:42:40.191496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:42:40.191504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:42:40.191512 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:42:40.191519 | orchestrator |
2025-07-12 20:42:40.191527 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2025-07-12 20:42:40.191535 | orchestrator | Saturday 12 July 2025 20:40:31 +0000 (0:00:00.723) 0:02:32.115 *********
2025-07-12 20:42:40.191548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-07-12 20:42:40.191569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:42:40.191601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:42:40.191616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:42:40.191631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:42:40.191645 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:42:40.191660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 20:42:40.191668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:42:40.191684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-07-12 20:42:40.191703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-07-12 20:42:40.191712 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-07-12 20:42:40.191720 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:42:40.191728 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-07-12 20:42:40.191743 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-07-12 20:42:40.191756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 
3306'], 'timeout': '30'}}})
2025-07-12 20:42:40.191779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-07-12 20:42:40.191797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-07-12 20:42:40.191810 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:42:40.191818 | orchestrator |
2025-07-12 20:42:40.191826 | orchestrator | TASK [octavia : Copying over config.json files for services] *******************
2025-07-12 20:42:40.191834 | orchestrator | Saturday 12 July 2025 20:40:32 +0000 (0:00:00.877) 0:02:32.993 *********
2025-07-12 20:42:40.191842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes':
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:42:40.191850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:42:40.191859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:42:40.191883 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:42:40.191898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:42:40.191918 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:42:40.191931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.191944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.191953 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.191961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.191980 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.191992 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.192004 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:42:40.192017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:42:40.192032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:42:40.192045 | orchestrator | 2025-07-12 20:42:40.192056 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2025-07-12 20:42:40.192065 | orchestrator | Saturday 12 July 2025 20:40:37 +0000 (0:00:05.183) 0:02:38.177 ********* 2025-07-12 20:42:40.192078 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-07-12 20:42:40.192124 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-07-12 20:42:40.192138 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2025-07-12 20:42:40.192152 | orchestrator | 2025-07-12 20:42:40.192160 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2025-07-12 20:42:40.192168 | orchestrator | Saturday 12 July 2025 20:40:39 +0000 (0:00:01.648) 0:02:39.825 ********* 2025-07-12 20:42:40.192182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': 
{'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:42:40.192195 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:42:40.192204 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 
'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:42:40.192212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:42:40.192224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:42:40.192247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:42:40.192268 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.192282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.192296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.192311 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.192325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.192342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.192351 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:42:40.192366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:42:40.192379 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:42:40.192387 | orchestrator | 2025-07-12 20:42:40.192395 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 
2025-07-12 20:42:40.192403 | orchestrator | Saturday 12 July 2025 20:40:56 +0000 (0:00:16.631) 0:02:56.457 ********* 2025-07-12 20:42:40.192411 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:42:40.192425 | orchestrator | changed: [testbed-node-1] 2025-07-12 20:42:40.192439 | orchestrator | changed: [testbed-node-2] 2025-07-12 20:42:40.192452 | orchestrator | 2025-07-12 20:42:40.192466 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-07-12 20:42:40.192479 | orchestrator | Saturday 12 July 2025 20:40:57 +0000 (0:00:01.515) 0:02:57.972 ********* 2025-07-12 20:42:40.192492 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-07-12 20:42:40.192506 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-07-12 20:42:40.192520 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-07-12 20:42:40.192534 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-07-12 20:42:40.192548 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-07-12 20:42:40.192557 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-07-12 20:42:40.192565 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-07-12 20:42:40.192582 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-07-12 20:42:40.192590 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-07-12 20:42:40.192598 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-07-12 20:42:40.192611 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-07-12 20:42:40.192624 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-07-12 20:42:40.192637 | orchestrator | 2025-07-12 20:42:40.192651 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-07-12 
20:42:40.192665 | orchestrator | Saturday 12 July 2025 20:41:03 +0000 (0:00:05.205) 0:03:03.177 ********* 2025-07-12 20:42:40.192678 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-07-12 20:42:40.192690 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-07-12 20:42:40.192698 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-07-12 20:42:40.192706 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-07-12 20:42:40.192714 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-07-12 20:42:40.192721 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-07-12 20:42:40.192729 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-07-12 20:42:40.192737 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-07-12 20:42:40.192744 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-07-12 20:42:40.192752 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-07-12 20:42:40.192760 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-07-12 20:42:40.192768 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-07-12 20:42:40.192775 | orchestrator | 2025-07-12 20:42:40.192783 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-07-12 20:42:40.192791 | orchestrator | Saturday 12 July 2025 20:41:08 +0000 (0:00:05.129) 0:03:08.307 ********* 2025-07-12 20:42:40.192799 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-07-12 20:42:40.192806 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-07-12 20:42:40.192814 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-07-12 20:42:40.192822 | orchestrator | changed: [testbed-node-0] => 
(item=client_ca.cert.pem) 2025-07-12 20:42:40.192830 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-07-12 20:42:40.192838 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-07-12 20:42:40.192852 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-07-12 20:42:40.192872 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-07-12 20:42:40.192885 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-07-12 20:42:40.192898 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-07-12 20:42:40.192912 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-07-12 20:42:40.192925 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-07-12 20:42:40.192934 | orchestrator | 2025-07-12 20:42:40.192942 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-07-12 20:42:40.192950 | orchestrator | Saturday 12 July 2025 20:41:13 +0000 (0:00:05.014) 0:03:13.322 ********* 2025-07-12 20:42:40.192962 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:42:40.192977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:42:40.192986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-07-12 20:42:40.192994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:42:40.193007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:42:40.193016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-07-12 20:42:40.193028 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.193043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.193051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.193059 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.193071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.193138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-07-12 20:42:40.193156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:42:40.193175 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:42:40.193189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-07-12 20:42:40.193203 | orchestrator | 2025-07-12 20:42:40.193217 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-07-12 20:42:40.193230 | orchestrator | Saturday 12 July 2025 20:41:16 +0000 (0:00:03.529) 0:03:16.851 ********* 2025-07-12 20:42:40.193243 | 
orchestrator | skipping: [testbed-node-0]
2025-07-12 20:42:40.193252 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:42:40.193261 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:42:40.193269 | orchestrator |
2025-07-12 20:42:40.193277 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2025-07-12 20:42:40.193285 | orchestrator | Saturday 12 July 2025 20:41:16 +0000 (0:00:00.310) 0:03:17.162 *********
2025-07-12 20:42:40.193292 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.193300 | orchestrator |
2025-07-12 20:42:40.193308 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2025-07-12 20:42:40.193316 | orchestrator | Saturday 12 July 2025 20:41:18 +0000 (0:00:01.989) 0:03:19.151 *********
2025-07-12 20:42:40.193323 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.193331 | orchestrator |
2025-07-12 20:42:40.193339 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2025-07-12 20:42:40.193346 | orchestrator | Saturday 12 July 2025 20:41:21 +0000 (0:00:02.417) 0:03:21.569 *********
2025-07-12 20:42:40.193354 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.193362 | orchestrator |
2025-07-12 20:42:40.193373 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2025-07-12 20:42:40.193386 | orchestrator | Saturday 12 July 2025 20:41:23 +0000 (0:00:02.021) 0:03:23.590 *********
2025-07-12 20:42:40.193398 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.193406 | orchestrator |
2025-07-12 20:42:40.193414 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2025-07-12 20:42:40.193428 | orchestrator | Saturday 12 July 2025 20:41:25 +0000 (0:00:02.102) 0:03:25.693 *********
2025-07-12 20:42:40.193442 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.193455 | orchestrator |
2025-07-12 20:42:40.193465 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-07-12 20:42:40.193473 | orchestrator | Saturday 12 July 2025 20:41:45 +0000 (0:00:19.950) 0:03:45.644 *********
2025-07-12 20:42:40.193481 | orchestrator |
2025-07-12 20:42:40.193489 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-07-12 20:42:40.193502 | orchestrator | Saturday 12 July 2025 20:41:45 +0000 (0:00:00.078) 0:03:45.722 *********
2025-07-12 20:42:40.193510 | orchestrator |
2025-07-12 20:42:40.193518 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2025-07-12 20:42:40.193525 | orchestrator | Saturday 12 July 2025 20:41:45 +0000 (0:00:00.073) 0:03:45.796 *********
2025-07-12 20:42:40.193533 | orchestrator |
2025-07-12 20:42:40.193541 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2025-07-12 20:42:40.193554 | orchestrator | Saturday 12 July 2025 20:41:45 +0000 (0:00:00.068) 0:03:45.865 *********
2025-07-12 20:42:40.193562 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.193570 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:42:40.193578 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:42:40.193585 | orchestrator |
2025-07-12 20:42:40.193593 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2025-07-12 20:42:40.193601 | orchestrator | Saturday 12 July 2025 20:42:02 +0000 (0:00:17.046) 0:04:02.911 *********
2025-07-12 20:42:40.193611 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.193623 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:42:40.193635 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:42:40.193642 | orchestrator |
2025-07-12 20:42:40.193649 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2025-07-12 20:42:40.193660 | orchestrator | Saturday 12 July 2025 20:42:09 +0000 (0:00:06.748) 0:04:09.659 *********
2025-07-12 20:42:40.193671 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:42:40.193682 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:42:40.193694 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.193704 | orchestrator |
2025-07-12 20:42:40.193716 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2025-07-12 20:42:40.193723 | orchestrator | Saturday 12 July 2025 20:42:17 +0000 (0:00:08.300) 0:04:17.959 *********
2025-07-12 20:42:40.193734 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:42:40.193741 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.193747 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:42:40.193754 | orchestrator |
2025-07-12 20:42:40.193760 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2025-07-12 20:42:40.193767 | orchestrator | Saturday 12 July 2025 20:42:28 +0000 (0:00:10.240) 0:04:28.199 *********
2025-07-12 20:42:40.193773 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:42:40.193780 | orchestrator | changed: [testbed-node-2]
2025-07-12 20:42:40.193786 | orchestrator | changed: [testbed-node-1]
2025-07-12 20:42:40.193793 | orchestrator |
2025-07-12 20:42:40.193799 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:42:40.193806 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-07-12 20:42:40.193814 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-12 20:42:40.193820 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-07-12 20:42:40.193827 | orchestrator |
2025-07-12 20:42:40.193834 | orchestrator |
2025-07-12 20:42:40.193845 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:42:40.193857 | orchestrator | Saturday 12 July 2025 20:42:38 +0000 (0:00:10.542) 0:04:38.742 *********
2025-07-12 20:42:40.193868 | orchestrator | ===============================================================================
2025-07-12 20:42:40.193879 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 19.95s
2025-07-12 20:42:40.193886 | orchestrator | octavia : Restart octavia-api container -------------------------------- 17.05s
2025-07-12 20:42:40.193893 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.63s
2025-07-12 20:42:40.193902 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.64s
2025-07-12 20:42:40.193920 | orchestrator | octavia : Add rules for security groups -------------------------------- 14.99s
2025-07-12 20:42:40.193931 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.54s
2025-07-12 20:42:40.193941 | orchestrator | octavia : Restart octavia-housekeeping container ----------------------- 10.24s
2025-07-12 20:42:40.193948 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.35s
2025-07-12 20:42:40.193954 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 8.30s
2025-07-12 20:42:40.193961 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.15s
2025-07-12 20:42:40.193967 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.41s
2025-07-12 20:42:40.193974 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 6.75s
2025-07-12 20:42:40.193980 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.35s
2025-07-12 20:42:40.193987 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.18s
2025-07-12 20:42:40.193994 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 5.44s
2025-07-12 20:42:40.194000 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.39s
2025-07-12 20:42:40.194007 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.21s
2025-07-12 20:42:40.194013 | orchestrator | octavia : Copying over config.json files for services ------------------- 5.18s
2025-07-12 20:42:40.194046 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 5.13s
2025-07-12 20:42:40.194058 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.02s
2025-07-12 20:42:40.194069 | orchestrator | 2025-07-12 20:42:40 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 20:42:43.221237 | orchestrator | 2025-07-12 20:42:43 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 20:42:46.272362 | orchestrator | 2025-07-12 20:42:46 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 20:42:49.310467 | orchestrator | 2025-07-12 20:42:49 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 20:42:52.361844 | orchestrator | 2025-07-12 20:42:52 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 20:42:55.404473 | orchestrator | 2025-07-12 20:42:55 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 20:42:58.444557 | orchestrator | 2025-07-12 20:42:58 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 20:43:01.489020 | orchestrator | 2025-07-12 20:43:01 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 20:43:04.531504 | orchestrator | 2025-07-12 20:43:04 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 20:43:07.565446 | orchestrator | 2025-07-12 20:43:07 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 20:43:10.608967 | orchestrator | 2025-07-12 20:43:10 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 20:43:13.648720 | orchestrator | 2025-07-12 20:43:13 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 20:43:16.695972 | orchestrator | 2025-07-12 20:43:16 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 20:43:19.736948 | orchestrator | 2025-07-12 20:43:19 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 20:43:22.772394 | orchestrator | 2025-07-12 20:43:22 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 20:43:25.819436 | orchestrator | 2025-07-12 20:43:25 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 20:43:28.861756 | orchestrator | 2025-07-12 20:43:28 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 20:43:31.903824 | orchestrator | 2025-07-12 20:43:31 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 20:43:34.941604 | orchestrator | 2025-07-12 20:43:34 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 20:43:37.983823 | orchestrator | 2025-07-12 20:43:37 | INFO  | Wait 1 second(s) until refresh of running tasks
2025-07-12 20:43:41.028936 | orchestrator |
2025-07-12 20:43:41.323331 | orchestrator |
2025-07-12 20:43:41.330269 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Sat Jul 12 20:43:41 UTC 2025
2025-07-12 20:43:41.330359 | orchestrator |
2025-07-12 20:43:41.670589 | orchestrator | ok: Runtime: 0:35:31.741834
2025-07-12 20:43:41.915496 |
2025-07-12 20:43:41.915645 | TASK [Bootstrap services]
2025-07-12 20:43:42.702752 | orchestrator |
2025-07-12 20:43:42.702902 | orchestrator | # BOOTSTRAP
2025-07-12 20:43:42.702916 | orchestrator |
2025-07-12 20:43:42.702925 | orchestrator | + set -e
2025-07-12 20:43:42.702932 | orchestrator | + echo
2025-07-12 20:43:42.702941 | orchestrator | + echo '# BOOTSTRAP'
2025-07-12 20:43:42.702952 | orchestrator | + echo
2025-07-12 20:43:42.702982 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2025-07-12 20:43:42.712021 | orchestrator | + set -e
2025-07-12 20:43:42.712720 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2025-07-12 20:43:47.326730 | orchestrator | 2025-07-12 20:43:47 | INFO  | It takes a moment until task daac1250-5f20-4dc7-9260-33d41ced584a (flavor-manager) has been started and output is visible here.
2025-07-12 20:43:55.228668 | orchestrator | 2025-07-12 20:43:51 | INFO  | Flavor SCS-1V-4 created
2025-07-12 20:43:55.228804 | orchestrator | 2025-07-12 20:43:51 | INFO  | Flavor SCS-2V-8 created
2025-07-12 20:43:55.228820 | orchestrator | 2025-07-12 20:43:51 | INFO  | Flavor SCS-4V-16 created
2025-07-12 20:43:55.228830 | orchestrator | 2025-07-12 20:43:51 | INFO  | Flavor SCS-8V-32 created
2025-07-12 20:43:55.228838 | orchestrator | 2025-07-12 20:43:51 | INFO  | Flavor SCS-1V-2 created
2025-07-12 20:43:55.228847 | orchestrator | 2025-07-12 20:43:52 | INFO  | Flavor SCS-2V-4 created
2025-07-12 20:43:55.228855 | orchestrator | 2025-07-12 20:43:52 | INFO  | Flavor SCS-4V-8 created
2025-07-12 20:43:55.228865 | orchestrator | 2025-07-12 20:43:52 | INFO  | Flavor SCS-8V-16 created
2025-07-12 20:43:55.228882 | orchestrator | 2025-07-12 20:43:52 | INFO  | Flavor SCS-16V-32 created
2025-07-12 20:43:55.228890 | orchestrator | 2025-07-12 20:43:52 | INFO  | Flavor SCS-1V-8 created
2025-07-12 20:43:55.228899 | orchestrator | 2025-07-12 20:43:52 | INFO  | Flavor SCS-2V-16 created
2025-07-12 20:43:55.228906 | orchestrator | 2025-07-12 20:43:52 | INFO  | Flavor SCS-4V-32 created
2025-07-12 20:43:55.228914 | orchestrator | 2025-07-12 20:43:52 | INFO  | Flavor SCS-1L-1 created
2025-07-12 20:43:55.228923 | orchestrator | 2025-07-12 20:43:53 | INFO  | Flavor SCS-2V-4-20s created
2025-07-12 20:43:55.228931 | orchestrator | 2025-07-12 20:43:53 | INFO  | Flavor SCS-4V-16-100s created
2025-07-12 20:43:55.228939 | orchestrator | 2025-07-12 20:43:53 | INFO  | Flavor SCS-1V-4-10 created
2025-07-12 20:43:55.228947 | orchestrator | 2025-07-12 20:43:53 | INFO  | Flavor SCS-2V-8-20 created
2025-07-12 20:43:55.228955 | orchestrator | 2025-07-12 20:43:53 | INFO  | Flavor SCS-4V-16-50 created
2025-07-12 20:43:55.228963 | orchestrator | 2025-07-12 20:43:53 | INFO  | Flavor SCS-8V-32-100 created
2025-07-12 20:43:55.228971 | orchestrator | 2025-07-12 20:43:53 | INFO  | Flavor SCS-1V-2-5 created
2025-07-12 20:43:55.228979 | orchestrator | 2025-07-12 20:43:53 | INFO  | Flavor SCS-2V-4-10 created
2025-07-12 20:43:55.228987 | orchestrator | 2025-07-12 20:43:54 | INFO  | Flavor SCS-4V-8-20 created
2025-07-12 20:43:55.228995 | orchestrator | 2025-07-12 20:43:54 | INFO  | Flavor SCS-8V-16-50 created
2025-07-12 20:43:55.229004 | orchestrator | 2025-07-12 20:43:54 | INFO  | Flavor SCS-16V-32-100 created
2025-07-12 20:43:55.229012 | orchestrator | 2025-07-12 20:43:54 | INFO  | Flavor SCS-1V-8-20 created
2025-07-12 20:43:55.229019 | orchestrator | 2025-07-12 20:43:54 | INFO  | Flavor SCS-2V-16-50 created
2025-07-12 20:43:55.229028 | orchestrator | 2025-07-12 20:43:54 | INFO  | Flavor SCS-4V-32-100 created
2025-07-12 20:43:55.229036 | orchestrator | 2025-07-12 20:43:54 | INFO  | Flavor SCS-1L-1-5 created
2025-07-12 20:43:57.459281 | orchestrator | 2025-07-12 20:43:57 | INFO  | Trying to run play bootstrap-basic in environment openstack
2025-07-12 20:44:07.662200 | orchestrator | 2025-07-12 20:44:07 | INFO  | Task cbf590f0-2c42-4120-ac66-de33d8edd6f3 (bootstrap-basic) was prepared for execution.
2025-07-12 20:44:07.662354 | orchestrator | 2025-07-12 20:44:07 | INFO  | It takes a moment until task cbf590f0-2c42-4120-ac66-de33d8edd6f3 (bootstrap-basic) has been started and output is visible here.
2025-07-12 20:45:08.229466 | orchestrator |
2025-07-12 20:45:08.229599 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2025-07-12 20:45:08.229617 | orchestrator |
2025-07-12 20:45:08.229630 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-07-12 20:45:08.229641 | orchestrator | Saturday 12 July 2025 20:44:11 +0000 (0:00:00.077) 0:00:00.077 *********
2025-07-12 20:45:08.229653 | orchestrator | ok: [localhost]
2025-07-12 20:45:08.229665 | orchestrator |
2025-07-12 20:45:08.229676 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2025-07-12 20:45:08.229689 | orchestrator | Saturday 12 July 2025 20:44:13 +0000 (0:00:01.896) 0:00:01.973 *********
2025-07-12 20:45:08.229700 | orchestrator | ok: [localhost]
2025-07-12 20:45:08.229711 | orchestrator |
2025-07-12 20:45:08.229722 | orchestrator | TASK [Create volume type LUKS] *************************************************
2025-07-12 20:45:08.229733 | orchestrator | Saturday 12 July 2025 20:44:22 +0000 (0:00:08.341) 0:00:10.315 *********
2025-07-12 20:45:08.229744 | orchestrator | changed: [localhost]
2025-07-12 20:45:08.229756 | orchestrator |
2025-07-12 20:45:08.229767 | orchestrator | TASK [Get volume type local] ***************************************************
2025-07-12 20:45:08.229778 | orchestrator | Saturday 12 July 2025 20:44:29 +0000 (0:00:07.735) 0:00:18.050 *********
2025-07-12 20:45:08.229789 | orchestrator | ok: [localhost]
2025-07-12 20:45:08.229801 | orchestrator |
2025-07-12 20:45:08.229812 | orchestrator | TASK [Create volume type local] ************************************************
2025-07-12 20:45:08.229823 | orchestrator | Saturday 12 July 2025 20:44:37 +0000 (0:00:07.622) 0:00:25.673 *********
2025-07-12 20:45:08.229834 | orchestrator | changed: [localhost]
2025-07-12 20:45:08.229849 | orchestrator |
2025-07-12 20:45:08.229861 | orchestrator | TASK [Create public network] ***************************************************
2025-07-12 20:45:08.229872 | orchestrator | Saturday 12 July 2025 20:44:44 +0000 (0:00:06.814) 0:00:32.487 *********
2025-07-12 20:45:08.229883 | orchestrator | changed: [localhost]
2025-07-12 20:45:08.229893 | orchestrator |
2025-07-12 20:45:08.229904 | orchestrator | TASK [Set public network to default] *******************************************
2025-07-12 20:45:08.229915 | orchestrator | Saturday 12 July 2025 20:44:49 +0000 (0:00:05.206) 0:00:37.694 *********
2025-07-12 20:45:08.229926 | orchestrator | changed: [localhost]
2025-07-12 20:45:08.229937 | orchestrator |
2025-07-12 20:45:08.229959 | orchestrator | TASK [Create public subnet] ****************************************************
2025-07-12 20:45:08.229972 | orchestrator | Saturday 12 July 2025 20:44:55 +0000 (0:00:06.332) 0:00:44.027 *********
2025-07-12 20:45:08.229985 | orchestrator | changed: [localhost]
2025-07-12 20:45:08.229998 | orchestrator |
2025-07-12 20:45:08.230010 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2025-07-12 20:45:08.230079 | orchestrator | Saturday 12 July 2025 20:45:00 +0000 (0:00:04.334) 0:00:48.361 *********
2025-07-12 20:45:08.230092 | orchestrator | changed: [localhost]
2025-07-12 20:45:08.230104 | orchestrator |
2025-07-12 20:45:08.230117 | orchestrator | TASK [Create manager role] *****************************************************
2025-07-12 20:45:08.230130 | orchestrator | Saturday 12 July 2025 20:45:04 +0000 (0:00:04.072) 0:00:52.433 *********
2025-07-12 20:45:08.230143 | orchestrator | ok: [localhost]
2025-07-12 20:45:08.230155 | orchestrator |
2025-07-12 20:45:08.230168 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:45:08.230199 | orchestrator | localhost : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 20:45:08.230213 | orchestrator |
2025-07-12 20:45:08.230225 | orchestrator |
2025-07-12 20:45:08.230237 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:45:08.230250 | orchestrator | Saturday 12 July 2025 20:45:07 +0000 (0:00:03.705) 0:00:56.139 *********
2025-07-12 20:45:08.230290 | orchestrator | ===============================================================================
2025-07-12 20:45:08.230303 | orchestrator | Get volume type LUKS ---------------------------------------------------- 8.34s
2025-07-12 20:45:08.230316 | orchestrator | Create volume type LUKS ------------------------------------------------- 7.74s
2025-07-12 20:45:08.230329 | orchestrator | Get volume type local --------------------------------------------------- 7.62s
2025-07-12 20:45:08.230342 | orchestrator | Create volume type local ------------------------------------------------ 6.81s
2025-07-12 20:45:08.230354 | orchestrator | Set public network to default ------------------------------------------- 6.33s
2025-07-12 20:45:08.230367 | orchestrator | Create public network --------------------------------------------------- 5.21s
2025-07-12 20:45:08.230379 | orchestrator | Create public subnet ---------------------------------------------------- 4.33s
2025-07-12 20:45:08.230390 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.07s
2025-07-12 20:45:08.230401 | orchestrator | Create manager role ----------------------------------------------------- 3.71s
2025-07-12 20:45:08.230413 | orchestrator | Gathering Facts --------------------------------------------------------- 1.90s
2025-07-12 20:45:10.597417 | orchestrator | 2025-07-12 20:45:10 | INFO  | It takes a moment until task 57851330-9cb2-428c-8992-10462fe4fca2 (image-manager) has been started and output is visible here.
2025-07-12 20:45:51.174100 | orchestrator | 2025-07-12 20:45:14 | INFO  | Processing image 'Cirros 0.6.2'
2025-07-12 20:45:51.174353 | orchestrator | 2025-07-12 20:45:14 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2025-07-12 20:45:51.174395 | orchestrator | 2025-07-12 20:45:14 | INFO  | Importing image Cirros 0.6.2
2025-07-12 20:45:51.174415 | orchestrator | 2025-07-12 20:45:14 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-07-12 20:45:51.174435 | orchestrator | 2025-07-12 20:45:15 | INFO  | Waiting for image to leave queued state...
2025-07-12 20:45:51.174455 | orchestrator | 2025-07-12 20:45:17 | INFO  | Waiting for import to complete...
2025-07-12 20:45:51.174474 | orchestrator | 2025-07-12 20:45:28 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2025-07-12 20:45:51.174494 | orchestrator | 2025-07-12 20:45:28 | INFO  | Checking parameters of 'Cirros 0.6.2'
2025-07-12 20:45:51.174513 | orchestrator | 2025-07-12 20:45:28 | INFO  | Setting internal_version = 0.6.2
2025-07-12 20:45:51.174534 | orchestrator | 2025-07-12 20:45:28 | INFO  | Setting image_original_user = cirros
2025-07-12 20:45:51.174553 | orchestrator | 2025-07-12 20:45:28 | INFO  | Adding tag os:cirros
2025-07-12 20:45:51.174574 | orchestrator | 2025-07-12 20:45:28 | INFO  | Setting property architecture: x86_64
2025-07-12 20:45:51.174593 | orchestrator | 2025-07-12 20:45:29 | INFO  | Setting property hw_disk_bus: scsi
2025-07-12 20:45:51.174612 | orchestrator | 2025-07-12 20:45:29 | INFO  | Setting property hw_rng_model: virtio
2025-07-12 20:45:51.174631 | orchestrator | 2025-07-12 20:45:29 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-07-12 20:45:51.174651 | orchestrator | 2025-07-12 20:45:29 | INFO  | Setting property hw_watchdog_action: reset
2025-07-12 20:45:51.174669 | orchestrator | 2025-07-12 20:45:30 | INFO  | Setting property hypervisor_type: qemu
2025-07-12 20:45:51.174689 | orchestrator | 2025-07-12 20:45:30 | INFO  | Setting property os_distro: cirros
2025-07-12 20:45:51.174703 | orchestrator | 2025-07-12 20:45:30 | INFO  | Setting property replace_frequency: never
2025-07-12 20:45:51.174714 | orchestrator | 2025-07-12 20:45:30 | INFO  | Setting property uuid_validity: none
2025-07-12 20:45:51.174725 | orchestrator | 2025-07-12 20:45:30 | INFO  | Setting property provided_until: none
2025-07-12 20:45:51.174761 | orchestrator | 2025-07-12 20:45:31 | INFO  | Setting property image_description: Cirros
2025-07-12 20:45:51.174784 | orchestrator | 2025-07-12 20:45:31 | INFO  | Setting property image_name: Cirros
2025-07-12 20:45:51.174796 | orchestrator | 2025-07-12 20:45:31 | INFO  | Setting property internal_version: 0.6.2
2025-07-12 20:45:51.174811 | orchestrator | 2025-07-12 20:45:31 | INFO  | Setting property image_original_user: cirros
2025-07-12 20:45:51.174822 | orchestrator | 2025-07-12 20:45:31 | INFO  | Setting property os_version: 0.6.2
2025-07-12 20:45:51.174834 | orchestrator | 2025-07-12 20:45:32 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2025-07-12 20:45:51.174847 | orchestrator | 2025-07-12 20:45:32 | INFO  | Setting property image_build_date: 2023-05-30
2025-07-12 20:45:51.174858 | orchestrator | 2025-07-12 20:45:32 | INFO  | Checking status of 'Cirros 0.6.2'
2025-07-12 20:45:51.174868 | orchestrator | 2025-07-12 20:45:32 | INFO  | Checking visibility of 'Cirros 0.6.2'
2025-07-12 20:45:51.174879 | orchestrator | 2025-07-12 20:45:32 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2025-07-12 20:45:51.174890 | orchestrator | 2025-07-12 20:45:32 | INFO  | Processing image 'Cirros 0.6.3'
2025-07-12 20:45:51.174901 | orchestrator | 2025-07-12 20:45:32 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2025-07-12 20:45:51.174912 | orchestrator | 2025-07-12 20:45:32 | INFO  | Importing image Cirros 0.6.3
2025-07-12 20:45:51.174923 | orchestrator | 2025-07-12 20:45:32 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-07-12 20:45:51.174934 | orchestrator | 2025-07-12 20:45:34 | INFO  | Waiting for image to leave queued state...
2025-07-12 20:45:51.174944 | orchestrator | 2025-07-12 20:45:36 | INFO  | Waiting for import to complete...
2025-07-12 20:45:51.174955 | orchestrator | 2025-07-12 20:45:46 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2025-07-12 20:45:51.174988 | orchestrator | 2025-07-12 20:45:46 | INFO  | Checking parameters of 'Cirros 0.6.3'
2025-07-12 20:45:51.174999 | orchestrator | 2025-07-12 20:45:46 | INFO  | Setting internal_version = 0.6.3
2025-07-12 20:45:51.175011 | orchestrator | 2025-07-12 20:45:46 | INFO  | Setting image_original_user = cirros
2025-07-12 20:45:51.175021 | orchestrator | 2025-07-12 20:45:46 | INFO  | Adding tag os:cirros
2025-07-12 20:45:51.175032 | orchestrator | 2025-07-12 20:45:46 | INFO  | Setting property architecture: x86_64
2025-07-12 20:45:51.175043 | orchestrator | 2025-07-12 20:45:46 | INFO  | Setting property hw_disk_bus: scsi
2025-07-12 20:45:51.175054 | orchestrator | 2025-07-12 20:45:47 | INFO  | Setting property hw_rng_model: virtio
2025-07-12 20:45:51.175064 | orchestrator | 2025-07-12 20:45:47 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-07-12 20:45:51.175075 | orchestrator | 2025-07-12 20:45:47 | INFO  | Setting property hw_watchdog_action: reset
2025-07-12 20:45:51.175086 | orchestrator | 2025-07-12 20:45:47 | INFO  | Setting property hypervisor_type: qemu
2025-07-12 20:45:51.175097 | orchestrator | 2025-07-12 20:45:47 | INFO  | Setting property os_distro: cirros
2025-07-12 20:45:51.175108 | orchestrator | 2025-07-12 20:45:48 | INFO  | Setting property replace_frequency: never
2025-07-12 20:45:51.175119 | orchestrator | 2025-07-12 20:45:48 | INFO  | Setting property uuid_validity: none
2025-07-12 20:45:51.175139 | orchestrator | 2025-07-12 20:45:48 | INFO  | Setting property provided_until: none
2025-07-12 20:45:51.175149 | orchestrator | 2025-07-12 20:45:48 | INFO  | Setting property image_description: Cirros
2025-07-12 20:45:51.175160 | orchestrator | 2025-07-12 20:45:48 | INFO  | Setting property image_name: Cirros
2025-07-12 20:45:51.175171 | orchestrator | 2025-07-12 20:45:49 | INFO  | Setting property internal_version: 0.6.3
2025-07-12 20:45:51.175182 | orchestrator | 2025-07-12 20:45:49 | INFO  | Setting property image_original_user: cirros
2025-07-12 20:45:51.175193 | orchestrator | 2025-07-12 20:45:49 | INFO  | Setting property os_version: 0.6.3
2025-07-12 20:45:51.175204 | orchestrator | 2025-07-12 20:45:49 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2025-07-12 20:45:51.175280 | orchestrator | 2025-07-12 20:45:50 | INFO  | Setting property image_build_date: 2024-09-26
2025-07-12 20:45:51.175292 | orchestrator | 2025-07-12 20:45:50 | INFO  | Checking status of 'Cirros 0.6.3'
2025-07-12 20:45:51.175303 | orchestrator | 2025-07-12 20:45:50 | INFO  | Checking visibility of 'Cirros 0.6.3'
2025-07-12 20:45:51.175320 | orchestrator | 2025-07-12 20:45:50 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2025-07-12 20:45:51.457273 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2025-07-12 20:45:53.445411 | orchestrator | 2025-07-12 20:45:53 | INFO  | date: 2025-07-12
2025-07-12 20:45:53.445518 | orchestrator | 2025-07-12 20:45:53 | INFO  | image: octavia-amphora-haproxy-2024.2.20250712.qcow2
2025-07-12 20:45:53.445543 | orchestrator | 2025-07-12 20:45:53 | INFO  | url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2
2025-07-12 20:45:53.445578 | orchestrator | 2025-07-12 20:45:53 | INFO  | checksum_url: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2.CHECKSUM
2025-07-12 20:45:53.473935 | orchestrator | 2025-07-12 20:45:53 | INFO  | checksum: c95855ae58dddb977df0d8e11b851fc66dd0abac9e608812e6020c0a95df8f26
2025-07-12 20:45:53.554148 | orchestrator | 2025-07-12 20:45:53 | INFO  | It takes a moment until task 183fd650-1a8d-4fa6-8020-09b205357bd0 (image-manager) has been started and output is visible here.
2025-07-12 20:46:53.277411 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_image_manager/__init__.py:5: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
2025-07-12 20:46:53.277566 | orchestrator | from pkg_resources import get_distribution, DistributionNotFound
2025-07-12 20:46:53.277585 | orchestrator | 2025-07-12 20:45:55 | INFO  | Processing image 'OpenStack Octavia Amphora 2025-07-12'
2025-07-12 20:46:53.277604 | orchestrator | 2025-07-12 20:45:55 | INFO  | Tested URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2: 200
2025-07-12 20:46:53.277659 | orchestrator | 2025-07-12 20:45:55 | INFO  | Importing image OpenStack Octavia Amphora 2025-07-12
2025-07-12 20:46:53.277674 | orchestrator | 2025-07-12 20:45:55 | INFO  | Importing from URL https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2
2025-07-12 20:46:53.277688 | orchestrator | 2025-07-12 20:45:55 | INFO  | Waiting for image to leave queued state...
2025-07-12 20:46:53.277728 | orchestrator | 2025-07-12 20:45:57 | INFO  | Waiting for import to complete...
2025-07-12 20:46:53.277740 | orchestrator | 2025-07-12 20:46:07 | INFO  | Waiting for import to complete...
2025-07-12 20:46:53.277751 | orchestrator | 2025-07-12 20:46:17 | INFO  | Waiting for import to complete...
2025-07-12 20:46:53.277762 | orchestrator | 2025-07-12 20:46:28 | INFO  | Waiting for import to complete...
2025-07-12 20:46:53.277774 | orchestrator | 2025-07-12 20:46:38 | INFO  | Waiting for import to complete...
2025-07-12 20:46:53.277785 | orchestrator | 2025-07-12 20:46:48 | INFO  | Import of 'OpenStack Octavia Amphora 2025-07-12' successfully completed, reloading images
2025-07-12 20:46:53.277797 | orchestrator | 2025-07-12 20:46:49 | INFO  | Checking parameters of 'OpenStack Octavia Amphora 2025-07-12'
2025-07-12 20:46:53.277808 | orchestrator | 2025-07-12 20:46:49 | INFO  | Setting internal_version = 2025-07-12
2025-07-12 20:46:53.277820 | orchestrator | 2025-07-12 20:46:49 | INFO  | Setting image_original_user = ubuntu
2025-07-12 20:46:53.277831 | orchestrator | 2025-07-12 20:46:49 | INFO  | Adding tag amphora
2025-07-12 20:46:53.277842 | orchestrator | 2025-07-12 20:46:49 | INFO  | Adding tag os:ubuntu
2025-07-12 20:46:53.277853 | orchestrator | 2025-07-12 20:46:49 | INFO  | Setting property architecture: x86_64
2025-07-12 20:46:53.277864 | orchestrator | 2025-07-12 20:46:49 | INFO  | Setting property hw_disk_bus: scsi
2025-07-12 20:46:53.277878 | orchestrator | 2025-07-12 20:46:49 | INFO  | Setting property hw_rng_model: virtio
2025-07-12 20:46:53.277900 | orchestrator | 2025-07-12 20:46:50 | INFO  | Setting property hw_scsi_model: virtio-scsi
2025-07-12 20:46:53.277913 | orchestrator | 2025-07-12 20:46:50 | INFO  | Setting property hw_watchdog_action: reset
2025-07-12 20:46:53.277926 | orchestrator | 2025-07-12 20:46:50 | INFO  | Setting property hypervisor_type: qemu
2025-07-12 20:46:53.277939 | orchestrator | 2025-07-12 20:46:50 | INFO  | Setting property os_distro: ubuntu
2025-07-12 20:46:53.277951 | orchestrator | 2025-07-12 20:46:50 | INFO  | Setting property replace_frequency: quarterly
2025-07-12 20:46:53.277963 | orchestrator | 2025-07-12 20:46:51 | INFO  | Setting property uuid_validity: last-1
2025-07-12 20:46:53.277975 | orchestrator | 2025-07-12 20:46:51 | INFO  | Setting property provided_until: none
2025-07-12 20:46:53.277988 | orchestrator | 2025-07-12 20:46:51 | INFO  | Setting property image_description: OpenStack Octavia Amphora 2025-07-12
2025-07-12 20:46:53.278001 | orchestrator | 2025-07-12 20:46:51 | INFO  | Setting property image_name: OpenStack Octavia Amphora 2025-07-12
2025-07-12 20:46:53.278059 | orchestrator | 2025-07-12 20:46:51 | INFO  | Setting property internal_version: 2025-07-12
2025-07-12 20:46:53.278074 | orchestrator | 2025-07-12 20:46:52 | INFO  | Setting property image_original_user: ubuntu
2025-07-12 20:46:53.278088 | orchestrator | 2025-07-12 20:46:52 | INFO  | Setting property os_version: 2025-07-12
2025-07-12 20:46:53.278101 | orchestrator | 2025-07-12 20:46:52 | INFO  | Setting property image_source: https://swift.services.a.regiocloud.tech/swift/v1/AUTH_b182637428444b9aa302bb8d5a5a418c/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20250712.qcow2
2025-07-12 20:46:53.278131 | orchestrator | 2025-07-12 20:46:52 | INFO  | Setting property image_build_date: 2025-07-12
2025-07-12 20:46:53.278144 | orchestrator | 2025-07-12 20:46:52 | INFO  | Checking status of 'OpenStack Octavia Amphora 2025-07-12'
2025-07-12 20:46:53.278156 | orchestrator | 2025-07-12 20:46:52 | INFO  | Checking visibility of 'OpenStack Octavia Amphora 2025-07-12'
2025-07-12 20:46:53.278212 | orchestrator | 2025-07-12 20:46:53 | INFO  | Processing image 'Cirros 0.6.3' (removal candidate)
2025-07-12 20:46:53.278226 | orchestrator | 2025-07-12 20:46:53 | WARNING  | No image definition found for 'Cirros 0.6.3', image will be ignored
2025-07-12 20:46:53.278241 | orchestrator | 2025-07-12 20:46:53 | INFO  | Processing image 'Cirros 0.6.2' (removal candidate)
2025-07-12 20:46:53.278252 | orchestrator | 2025-07-12 20:46:53 | WARNING  | No image definition found for 'Cirros 0.6.2', image will be ignored
2025-07-12 20:46:53.684878 | orchestrator | ok: Runtime: 0:03:11.290428
2025-07-12 20:46:53.698231 |
2025-07-12 20:46:53.698367 | TASK [Run checks]
2025-07-12 20:46:54.406818 | orchestrator | + set -e
2025-07-12 20:46:54.406962 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-07-12 20:46:54.406974 | orchestrator | ++ export INTERACTIVE=false
2025-07-12 20:46:54.406984 | orchestrator | ++ INTERACTIVE=false
2025-07-12 20:46:54.406991 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-07-12 20:46:54.406997 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-07-12 20:46:54.407004 | orchestrator | + source /opt/configuration/scripts/manager-version.sh
2025-07-12 20:46:54.408573 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml
2025-07-12 20:46:54.415184 | orchestrator |
2025-07-12 20:46:54.415233 | orchestrator | # CHECK
2025-07-12 20:46:54.415241 | orchestrator |
2025-07-12 20:46:54.415248 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-07-12 20:46:54.415258 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-07-12 20:46:54.415290 | orchestrator | + echo
2025-07-12 20:46:54.415297 | orchestrator | + echo '# CHECK'
2025-07-12 20:46:54.415304 | orchestrator | + echo
2025-07-12 20:46:54.415313 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-07-12 20:46:54.416126 | orchestrator | ++ semver 9.2.0 5.0.0
2025-07-12 20:46:54.485063 | orchestrator |
2025-07-12 20:46:54.485169 | orchestrator | ## Containers @ testbed-manager
2025-07-12 20:46:54.485185 | orchestrator |
2025-07-12 20:46:54.485200 | orchestrator | + [[ 1 -eq -1 ]]
2025-07-12 20:46:54.485211 | orchestrator | + echo
2025-07-12 20:46:54.485223 | orchestrator | + echo '## Containers @ testbed-manager'
2025-07-12 20:46:54.485235 | orchestrator | + echo
2025-07-12 20:46:54.485246 | orchestrator | + osism container testbed-manager ps
2025-07-12 20:46:56.735664 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-07-12 20:46:56.735789 | orchestrator | 6659018a350e registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.25.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_blackbox_exporter
2025-07-12 20:46:56.735820 | orchestrator | 20924f886ca9
registry.osism.tech/kolla/release/prometheus-alertmanager:0.28.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_alertmanager 2025-07-12 20:46:56.735846 | orchestrator | 02c55856299f registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_cadvisor 2025-07-12 20:46:56.735862 | orchestrator | 41573f416ca9 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-07-12 20:46:56.735878 | orchestrator | 1d39252bce95 registry.osism.tech/kolla/release/prometheus-v2-server:2.55.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_server 2025-07-12 20:46:56.735891 | orchestrator | e9125fc9dbd9 registry.osism.tech/osism/cephclient:18.2.7 "/usr/bin/dumb-init …" 18 minutes ago Up 18 minutes cephclient 2025-07-12 20:46:56.735905 | orchestrator | b7074844a368 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes cron 2025-07-12 20:46:56.735915 | orchestrator | 5ae0da6f3944 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-07-12 20:46:56.735924 | orchestrator | 43c452c13e9e registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes fluentd 2025-07-12 20:46:56.735957 | orchestrator | a3f1a9c0f6c9 phpmyadmin/phpmyadmin:5.2 "/docker-entrypoint.…" 32 minutes ago Up 31 minutes (healthy) 80/tcp phpmyadmin 2025-07-12 20:46:56.735966 | orchestrator | 48fd633e060b registry.osism.tech/osism/openstackclient:2024.2 "/usr/bin/dumb-init …" 33 minutes ago Up 32 minutes openstackclient 2025-07-12 20:46:56.735976 | orchestrator | 9d6cc4d3d9a5 registry.osism.tech/osism/homer:v25.05.2 "/bin/sh /entrypoint…" 33 minutes ago Up 32 minutes (healthy) 8080/tcp homer 2025-07-12 20:46:56.735985 | orchestrator 
| d4edee4295c9 registry.osism.tech/dockerhub/ubuntu/squid:6.1-23.10_beta "entrypoint.sh -f /e…" 52 minutes ago Up 51 minutes (healthy) 192.168.16.5:3128->3128/tcp squid 2025-07-12 20:46:56.735999 | orchestrator | 634b7d2617e2 registry.osism.tech/osism/inventory-reconciler:0.20250711.0 "/sbin/tini -- /entr…" 56 minutes ago Up 39 minutes (healthy) manager-inventory_reconciler-1 2025-07-12 20:46:56.736029 | orchestrator | 2068c6eea0bc registry.osism.tech/osism/kolla-ansible:0.20250711.0 "/entrypoint.sh osis…" 56 minutes ago Up 39 minutes (healthy) kolla-ansible 2025-07-12 20:46:56.736038 | orchestrator | 1f602e8913f9 registry.osism.tech/osism/osism-ansible:0.20250711.0 "/entrypoint.sh osis…" 56 minutes ago Up 39 minutes (healthy) osism-ansible 2025-07-12 20:46:56.736047 | orchestrator | c2896eaf0bfa registry.osism.tech/osism/osism-kubernetes:0.20250711.0 "/entrypoint.sh osis…" 56 minutes ago Up 39 minutes (healthy) osism-kubernetes 2025-07-12 20:46:56.736056 | orchestrator | 48d1808b8529 registry.osism.tech/osism/ceph-ansible:0.20250711.0 "/entrypoint.sh osis…" 56 minutes ago Up 39 minutes (healthy) ceph-ansible 2025-07-12 20:46:56.736065 | orchestrator | 8da8902ed9c1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" 56 minutes ago Up 40 minutes (healthy) 8000/tcp manager-ara-server-1 2025-07-12 20:46:56.736074 | orchestrator | 662ada33198a registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 56 minutes ago Up 40 minutes (healthy) 192.168.16.5:8000->8000/tcp manager-api-1 2025-07-12 20:46:56.736083 | orchestrator | 0da9e90869d6 registry.osism.tech/dockerhub/library/mariadb:11.8.2 "docker-entrypoint.s…" 56 minutes ago Up 40 minutes (healthy) 3306/tcp manager-mariadb-1 2025-07-12 20:46:56.736092 | orchestrator | 85ad115a0ae0 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 56 minutes ago Up 40 minutes (healthy) manager-beat-1 2025-07-12 20:46:56.736101 | orchestrator | 23736b341bd6 
registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- sleep…" 56 minutes ago Up 40 minutes (healthy) osismclient 2025-07-12 20:46:56.736117 | orchestrator | 8ca8fc98e300 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 56 minutes ago Up 40 minutes (healthy) manager-openstack-1 2025-07-12 20:46:56.736126 | orchestrator | d85af548f826 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 56 minutes ago Up 40 minutes (healthy) manager-listener-1 2025-07-12 20:46:56.736135 | orchestrator | bdd981286362 registry.osism.tech/osism/osism:0.20250709.0 "/sbin/tini -- osism…" 56 minutes ago Up 40 minutes (healthy) manager-flower-1 2025-07-12 20:46:56.736145 | orchestrator | ccc6ea4b3f99 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" 56 minutes ago Up 40 minutes (healthy) 6379/tcp manager-redis-1 2025-07-12 20:46:56.736154 | orchestrator | b1a5810a080b registry.osism.tech/dockerhub/library/traefik:v3.4.3 "/entrypoint.sh trae…" 57 minutes ago Up 57 minutes (healthy) 192.168.16.5:80->80/tcp, 192.168.16.5:443->443/tcp, 192.168.16.5:8122->8080/tcp traefik 2025-07-12 20:46:57.029698 | orchestrator | 2025-07-12 20:46:57.029829 | orchestrator | ## Images @ testbed-manager 2025-07-12 20:46:57.029852 | orchestrator | 2025-07-12 20:46:57.029872 | orchestrator | + echo 2025-07-12 20:46:57.029892 | orchestrator | + echo '## Images @ testbed-manager' 2025-07-12 20:46:57.029912 | orchestrator | + echo 2025-07-12 20:46:57.029930 | orchestrator | + osism container testbed-manager images 2025-07-12 20:46:59.190360 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-07-12 20:46:59.190496 | orchestrator | registry.osism.tech/osism/kolla-ansible 0.20250711.0 fcbac8373342 10 hours ago 571MB 2025-07-12 20:46:59.190526 | orchestrator | registry.osism.tech/osism/homer v25.05.2 d2fcb41febbc 17 hours ago 11.5MB 2025-07-12 20:46:59.190544 | orchestrator | registry.osism.tech/osism/openstackclient 2024.2 751f5a3be689 17 hours 
ago 234MB 2025-07-12 20:46:59.190562 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 24 hours ago 628MB 2025-07-12 20:46:59.190610 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 24 hours ago 746MB 2025-07-12 20:46:59.190631 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 24 hours ago 318MB 2025-07-12 20:46:59.190649 | orchestrator | registry.osism.tech/kolla/release/prometheus-v2-server 2.55.1.20250711 cb02c47a5187 24 hours ago 891MB 2025-07-12 20:46:59.190667 | orchestrator | registry.osism.tech/kolla/release/prometheus-blackbox-exporter 0.25.0.20250711 0ac8facfe451 24 hours ago 360MB 2025-07-12 20:46:59.190686 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 24 hours ago 410MB 2025-07-12 20:46:59.190703 | orchestrator | registry.osism.tech/kolla/release/prometheus-alertmanager 0.28.0.20250711 6c4eef6335f5 24 hours ago 456MB 2025-07-12 20:46:59.190721 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 24 hours ago 358MB 2025-07-12 20:46:59.190757 | orchestrator | registry.osism.tech/osism/osism-ansible 0.20250711.0 7b0f9e78b4e4 25 hours ago 575MB 2025-07-12 20:46:59.190789 | orchestrator | registry.osism.tech/osism/ceph-ansible 0.20250711.0 f677f8f8094b 25 hours ago 535MB 2025-07-12 20:46:59.190839 | orchestrator | registry.osism.tech/osism/inventory-reconciler 0.20250711.0 8fcfa643b744 25 hours ago 308MB 2025-07-12 20:46:59.190860 | orchestrator | registry.osism.tech/osism/osism-kubernetes 0.20250711.0 267f92fc46f6 25 hours ago 1.21GB 2025-07-12 20:46:59.190877 | orchestrator | registry.osism.tech/osism/osism 0.20250709.0 ccd699d89870 3 days ago 310MB 2025-07-12 20:46:59.190895 | orchestrator | registry.osism.tech/dockerhub/library/redis 7.4.5-alpine 555db38b5b92 6 days ago 41.4MB 2025-07-12 20:46:59.190911 | orchestrator | 
registry.osism.tech/dockerhub/library/traefik v3.4.3 4113453efcb3 2 weeks ago 226MB
2025-07-12 20:46:59.190929 | orchestrator | registry.osism.tech/dockerhub/library/mariadb 11.8.2 7fb85a4198e9 4 weeks ago 329MB
2025-07-12 20:46:59.190947 | orchestrator | registry.osism.tech/osism/cephclient 18.2.7 ae977aa79826 2 months ago 453MB
2025-07-12 20:46:59.190965 | orchestrator | phpmyadmin/phpmyadmin 5.2 0276a66ce322 5 months ago 571MB
2025-07-12 20:46:59.190982 | orchestrator | registry.osism.tech/osism/ara-server 1.7.2 bb44122eb176 10 months ago 300MB
2025-07-12 20:46:59.190999 | orchestrator | registry.osism.tech/dockerhub/ubuntu/squid 6.1-23.10_beta 34b6bbbcf74b 13 months ago 146MB
2025-07-12 20:46:59.500876 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2
2025-07-12 20:46:59.501127 | orchestrator | ++ semver 9.2.0 5.0.0
2025-07-12 20:46:59.552318 | orchestrator |
2025-07-12 20:46:59.552425 | orchestrator | ## Containers @ testbed-node-0
2025-07-12 20:46:59.552441 | orchestrator |
2025-07-12 20:46:59.552454 | orchestrator | + [[ 1 -eq -1 ]]
2025-07-12 20:46:59.552466 | orchestrator | + echo
2025-07-12 20:46:59.552478 | orchestrator | + echo '## Containers @ testbed-node-0'
2025-07-12 20:46:59.552491 | orchestrator | + echo
2025-07-12 20:46:59.552503 | orchestrator | + osism container testbed-node-0 ps
2025-07-12 20:47:01.948520 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2025-07-12 20:47:01.948634 | orchestrator | 54a137d89528 registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker
2025-07-12 20:47:01.948652 | orchestrator | e0fb87ea63f4 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping
2025-07-12 20:47:01.948665 | orchestrator | 6bbf7435c500 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711
"dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-07-12 20:47:01.948676 | orchestrator | 60672d92a3bc registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-07-12 20:47:01.948688 | orchestrator | 6c6932812e6b registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-07-12 20:47:01.948699 | orchestrator | 3f7742284375 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2025-07-12 20:47:01.948710 | orchestrator | a17ad703dc34 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-07-12 20:47:01.948742 | orchestrator | 826c784c40e3 registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes grafana 2025-07-12 20:47:01.948753 | orchestrator | 74307b26f31e registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-07-12 20:47:01.948786 | orchestrator | 749688313e89 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-07-12 20:47:01.948798 | orchestrator | b9bd9f54d600 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2025-07-12 20:47:01.948809 | orchestrator | 6442091d1af9 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-07-12 20:47:01.948820 | orchestrator | e56c92714471 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 
minutes (healthy) designate_mdns 2025-07-12 20:47:01.948831 | orchestrator | 4e2f8148c6c3 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-07-12 20:47:01.948842 | orchestrator | 732a1ecb0b5c registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2025-07-12 20:47:01.948853 | orchestrator | 278fbc6caf65 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-07-12 20:47:01.948864 | orchestrator | 5ac2ebce783f registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 10 minutes (healthy) designate_api 2025-07-12 20:47:01.948875 | orchestrator | a086c0b67d40 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9 2025-07-12 20:47:01.948886 | orchestrator | 5786af6f8d05 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2025-07-12 20:47:01.948917 | orchestrator | ba048357c862 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-07-12 20:47:01.948928 | orchestrator | b9e2c911e0df registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-07-12 20:47:01.948939 | orchestrator | f5f74c731927 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) nova_api 2025-07-12 20:47:01.948950 | orchestrator | 59828cfb6771 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init 
--single-…" 13 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-07-12 20:47:01.948961 | orchestrator | 1625d674400d registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-07-12 20:47:01.948975 | orchestrator | 166c11e2cefb registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-07-12 20:47:01.948985 | orchestrator | 9b33162fb375 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 15 minutes ago Up 14 minutes prometheus_cadvisor 2025-07-12 20:47:01.949002 | orchestrator | 4f4b684f2e12 registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-07-12 20:47:01.949021 | orchestrator | 33dd845ad6f1 registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-07-12 20:47:01.949032 | orchestrator | 6aea9f846f27 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_scheduler 2025-07-12 20:47:01.949043 | orchestrator | 7a5c82d162bd registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-07-12 20:47:01.949060 | orchestrator | 71926515ff3a registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api 2025-07-12 20:47:01.949071 | orchestrator | 7cdc32283dff registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-0 2025-07-12 20:47:01.949082 | orchestrator | 0f6c6214fc93 
registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 18 minutes ago Up 18 minutes (healthy) keystone 2025-07-12 20:47:01.949093 | orchestrator | f1179c244f45 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-07-12 20:47:01.949104 | orchestrator | 85bd9d0a85fb registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-07-12 20:47:01.949115 | orchestrator | 4cdbfe336bf4 registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-07-12 20:47:01.949125 | orchestrator | 0b6e2bfbe289 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-07-12 20:47:01.949141 | orchestrator | 5302ec90b656 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch_dashboards 2025-07-12 20:47:01.949153 | orchestrator | 6562e2b64338 registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 23 minutes ago Up 23 minutes (healthy) opensearch 2025-07-12 20:47:01.949163 | orchestrator | eb8b6703c12e registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 23 minutes ago Up 23 minutes ceph-crash-testbed-node-0 2025-07-12 20:47:01.949182 | orchestrator | 13e9bccd0a8c registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-07-12 20:47:01.949193 | orchestrator | d13cbad47d5c registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-07-12 20:47:01.949204 | orchestrator | af04be10caf8 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 24 
minutes ago Up 24 minutes (healthy) haproxy 2025-07-12 20:47:01.949215 | orchestrator | 921f02f641b6 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2025-07-12 20:47:01.949226 | orchestrator | 3cb9e251cd14 registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2025-07-12 20:47:01.949244 | orchestrator | 30865e4a9196 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2025-07-12 20:47:01.949256 | orchestrator | 699057fb7dc8 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-07-12 20:47:01.949308 | orchestrator | 56d7f597ab18 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-0 2025-07-12 20:47:01.949322 | orchestrator | 755723884d49 registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) rabbitmq 2025-07-12 20:47:01.949333 | orchestrator | 790857edc924 registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 29 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-07-12 20:47:01.949345 | orchestrator | 5e38f2788ff7 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-07-12 20:47:01.949356 | orchestrator | a6252396cd2f registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-07-12 20:47:01.949367 | orchestrator | c1256627880e registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-07-12 20:47:01.949378 | orchestrator | 
0332676552e6 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-07-12 20:47:01.949389 | orchestrator | 8cebfb49e891 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 31 minutes ago Up 30 minutes cron 2025-07-12 20:47:01.949400 | orchestrator | a31eaed10782 registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-07-12 20:47:01.949411 | orchestrator | 0808bf489c6b registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-07-12 20:47:02.243101 | orchestrator | 2025-07-12 20:47:02.243207 | orchestrator | ## Images @ testbed-node-0 2025-07-12 20:47:02.243224 | orchestrator | 2025-07-12 20:47:02.243236 | orchestrator | + echo 2025-07-12 20:47:02.243249 | orchestrator | + echo '## Images @ testbed-node-0' 2025-07-12 20:47:02.243261 | orchestrator | + echo 2025-07-12 20:47:02.243316 | orchestrator | + osism container testbed-node-0 images 2025-07-12 20:47:04.480474 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-07-12 20:47:04.480603 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 24 hours ago 628MB 2025-07-12 20:47:04.480629 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 24 hours ago 329MB 2025-07-12 20:47:04.480650 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 24 hours ago 326MB 2025-07-12 20:47:04.480672 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 24 hours ago 1.59GB 2025-07-12 20:47:04.480693 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 24 hours ago 1.55GB 2025-07-12 20:47:04.480713 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 24 hours ago 
417MB 2025-07-12 20:47:04.480768 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 24 hours ago 318MB 2025-07-12 20:47:04.480823 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 24 hours ago 746MB 2025-07-12 20:47:04.480844 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 24 hours ago 375MB 2025-07-12 20:47:04.480865 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 24 hours ago 1.01GB 2025-07-12 20:47:04.480907 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 24 hours ago 318MB 2025-07-12 20:47:04.480928 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 24 hours ago 361MB 2025-07-12 20:47:04.480949 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 24 hours ago 361MB 2025-07-12 20:47:04.480970 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 24 hours ago 1.21GB 2025-07-12 20:47:04.480991 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 24 hours ago 353MB 2025-07-12 20:47:04.481011 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 24 hours ago 410MB 2025-07-12 20:47:04.481031 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 24 hours ago 344MB 2025-07-12 20:47:04.481051 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 24 hours ago 358MB 2025-07-12 20:47:04.481072 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 24 hours ago 351MB 2025-07-12 20:47:04.481092 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 24 
hours ago 324MB 2025-07-12 20:47:04.481112 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 24 hours ago 324MB 2025-07-12 20:47:04.481132 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 24 hours ago 590MB 2025-07-12 20:47:04.481152 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 24 hours ago 946MB 2025-07-12 20:47:04.481172 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 24 hours ago 947MB 2025-07-12 20:47:04.481192 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 24 hours ago 947MB 2025-07-12 20:47:04.481212 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 24 hours ago 946MB 2025-07-12 20:47:04.481232 | orchestrator | registry.osism.tech/kolla/release/ceilometer-central 23.0.0.20250711 05a4552273f6 24 hours ago 1.04GB 2025-07-12 20:47:04.481252 | orchestrator | registry.osism.tech/kolla/release/ceilometer-notification 23.0.0.20250711 41f8c34132c7 24 hours ago 1.04GB 2025-07-12 20:47:04.481294 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 24 hours ago 1.1GB 2025-07-12 20:47:04.481315 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 24 hours ago 1.1GB 2025-07-12 20:47:04.481335 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 24 hours ago 1.12GB 2025-07-12 20:47:04.481382 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 24 hours ago 1.1GB 2025-07-12 20:47:04.481416 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 24 hours ago 1.12GB 2025-07-12 20:47:04.481436 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 24 
hours ago 1.15GB 2025-07-12 20:47:04.481456 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 24 hours ago 1.04GB 2025-07-12 20:47:04.481484 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 24 hours ago 1.06GB 2025-07-12 20:47:04.481504 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 24 hours ago 1.06GB 2025-07-12 20:47:04.481524 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 24 hours ago 1.06GB 2025-07-12 20:47:04.481544 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 24 hours ago 1.41GB 2025-07-12 20:47:04.481564 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 24 hours ago 1.41GB 2025-07-12 20:47:04.481584 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 24 hours ago 1.29GB 2025-07-12 20:47:04.481604 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 24 hours ago 1.42GB 2025-07-12 20:47:04.481623 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 24 hours ago 1.29GB 2025-07-12 20:47:04.481643 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 24 hours ago 1.29GB 2025-07-12 20:47:04.481663 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 24 hours ago 1.2GB 2025-07-12 20:47:04.481683 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 24 hours ago 1.31GB 2025-07-12 20:47:04.481703 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 24 hours ago 1.05GB 2025-07-12 20:47:04.481723 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 24 hours 
ago 1.05GB 2025-07-12 20:47:04.481743 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 24 hours ago 1.05GB 2025-07-12 20:47:04.481763 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 24 hours ago 1.06GB 2025-07-12 20:47:04.481782 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 24 hours ago 1.06GB 2025-07-12 20:47:04.481802 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 24 hours ago 1.05GB 2025-07-12 20:47:04.481821 | orchestrator | registry.osism.tech/kolla/release/skyline-console 5.0.1.20250711 f2e37439c6b7 24 hours ago 1.11GB 2025-07-12 20:47:04.481841 | orchestrator | registry.osism.tech/kolla/release/skyline-apiserver 5.0.1.20250711 b3d19c53d4de 24 hours ago 1.11GB 2025-07-12 20:47:04.481861 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 24 hours ago 1.11GB 2025-07-12 20:47:04.481880 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 24 hours ago 1.13GB 2025-07-12 20:47:04.481898 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 24 hours ago 1.11GB 2025-07-12 20:47:04.481918 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 24 hours ago 1.24GB 2025-07-12 20:47:04.481936 | orchestrator | registry.osism.tech/kolla/release/aodh-evaluator 19.0.0.20250711 c26d685bbc69 24 hours ago 1.04GB 2025-07-12 20:47:04.481963 | orchestrator | registry.osism.tech/kolla/release/aodh-listener 19.0.0.20250711 55a7448b63ad 24 hours ago 1.04GB 2025-07-12 20:47:04.481975 | orchestrator | registry.osism.tech/kolla/release/aodh-api 19.0.0.20250711 b8a4d60cb725 24 hours ago 1.04GB 2025-07-12 20:47:04.481986 | orchestrator | registry.osism.tech/kolla/release/aodh-notifier 19.0.0.20250711 c0822bfcb81c 24 hours ago 1.04GB 
2025-07-12 20:47:04.482002 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 2 months ago 1.27GB 2025-07-12 20:47:04.796799 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-07-12 20:47:04.797500 | orchestrator | ++ semver 9.2.0 5.0.0 2025-07-12 20:47:04.851020 | orchestrator | 2025-07-12 20:47:04.851123 | orchestrator | ## Containers @ testbed-node-1 2025-07-12 20:47:04.851136 | orchestrator | 2025-07-12 20:47:04.851147 | orchestrator | + [[ 1 -eq -1 ]] 2025-07-12 20:47:04.851157 | orchestrator | + echo 2025-07-12 20:47:04.851168 | orchestrator | + echo '## Containers @ testbed-node-1' 2025-07-12 20:47:04.851180 | orchestrator | + echo 2025-07-12 20:47:04.851190 | orchestrator | + osism container testbed-node-1 ps 2025-07-12 20:47:07.196094 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-07-12 20:47:07.196202 | orchestrator | b2ba27a3b51b registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-07-12 20:47:07.196219 | orchestrator | 9daaa190048a registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-07-12 20:47:07.196231 | orchestrator | 74ffaf2c4782 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-07-12 20:47:07.196243 | orchestrator | 4391d180edbc registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes octavia_driver_agent 2025-07-12 20:47:07.196254 | orchestrator | d0e4863212de registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-07-12 20:47:07.196265 | orchestrator | 5a41e5975104 
registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-07-12 20:47:07.196322 | orchestrator | 56678893b248 registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2025-07-12 20:47:07.196334 | orchestrator | b61cfa636371 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-07-12 20:47:07.196345 | orchestrator | fba0823da616 registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) placement_api 2025-07-12 20:47:07.196356 | orchestrator | 08270f1b7445 registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 9 minutes ago Up 9 minutes (healthy) neutron_server 2025-07-12 20:47:07.196367 | orchestrator | 7da93f2f7492 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2025-07-12 20:47:07.196379 | orchestrator | 57f1ad972dc7 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-07-12 20:47:07.196414 | orchestrator | 25fa76fcd802 registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2025-07-12 20:47:07.196426 | orchestrator | a270a22a2ca2 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) nova_conductor 2025-07-12 20:47:07.196437 | orchestrator | 488893651e33 registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2025-07-12 20:47:07.196448 | orchestrator | fc90cb665b76 
registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-07-12 20:47:07.196459 | orchestrator | 75149b1d3f9f registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2025-07-12 20:47:07.196470 | orchestrator | 1c31e550e87b registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9 2025-07-12 20:47:07.196505 | orchestrator | f3bb921654a1 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2025-07-12 20:47:07.196535 | orchestrator | 40814179c1a1 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-07-12 20:47:07.196547 | orchestrator | 1645f177a6ca registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-07-12 20:47:07.196558 | orchestrator | 95ef9da5bbf2 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) nova_api 2025-07-12 20:47:07.196568 | orchestrator | e4db2707c019 registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-07-12 20:47:07.196580 | orchestrator | 27c8a086166c registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-07-12 20:47:07.196592 | orchestrator | e1b307942179 registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-07-12 
20:47:07.196604 | orchestrator | 5340d9ee7719 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-07-12 20:47:07.196615 | orchestrator | cd0a03e1b96e registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-07-12 20:47:07.196626 | orchestrator | 70f48be13902 registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_scheduler 2025-07-12 20:47:07.196637 | orchestrator | 94cd5b95f89f registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-07-12 20:47:07.196648 | orchestrator | 5bba3924dbac registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api 2025-07-12 20:47:07.196682 | orchestrator | f4f48af86a88 registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-07-12 20:47:07.196693 | orchestrator | 93246e4f2546 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 16 minutes ago Up 16 minutes ceph-mgr-testbed-node-1 2025-07-12 20:47:07.196704 | orchestrator | 0108df9db464 registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 19 minutes ago Up 18 minutes (healthy) keystone 2025-07-12 20:47:07.196715 | orchestrator | e06b4fe1e0b1 registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_fernet 2025-07-12 20:47:07.196726 | orchestrator | 00509da822d6 registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-07-12 20:47:07.196737 
| orchestrator | 3404cc44550c registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-07-12 20:47:07.196748 | orchestrator | 54b9dc9512a5 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-07-12 20:47:07.196759 | orchestrator | c120e4b013f9 registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 22 minutes ago Up 22 minutes (healthy) mariadb 2025-07-12 20:47:07.196770 | orchestrator | aaeccb73e48d registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-07-12 20:47:07.196781 | orchestrator | 1119fbc1e6de registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-1 2025-07-12 20:47:07.196800 | orchestrator | 912621846fc6 registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-07-12 20:47:07.196818 | orchestrator | 354b38816fac registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-07-12 20:47:07.196830 | orchestrator | fcaa6b3b82a9 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-07-12 20:47:07.196841 | orchestrator | 9656c5d68ce6 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2025-07-12 20:47:07.196852 | orchestrator | 7ecd2c2b759d registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2025-07-12 20:47:07.196863 | orchestrator | 3f885d6f90e6 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init 
--single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2025-07-12 20:47:07.196874 | orchestrator | e3e5a2b8c40e registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-07-12 20:47:07.196886 | orchestrator | 64092371712f registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-07-12 20:47:07.196897 | orchestrator | f2ff6bfddcef registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-1 2025-07-12 20:47:07.196916 | orchestrator | a93be6b87cba registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-07-12 20:47:07.196927 | orchestrator | 94973962580d registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-07-12 20:47:07.196938 | orchestrator | c47092187f6b registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-07-12 20:47:07.196949 | orchestrator | 33faf6195ffc registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-07-12 20:47:07.196960 | orchestrator | ec70d80d0d96 registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-07-12 20:47:07.196971 | orchestrator | 53fba2d004ed registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes cron 2025-07-12 20:47:07.196981 | orchestrator | da030e963b5a registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-07-12 20:47:07.196992 | orchestrator | 4b6e539a2cea 
registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-07-12 20:47:07.478572 | orchestrator | 2025-07-12 20:47:07.478673 | orchestrator | ## Images @ testbed-node-1 2025-07-12 20:47:07.478688 | orchestrator | 2025-07-12 20:47:07.478700 | orchestrator | + echo 2025-07-12 20:47:07.478712 | orchestrator | + echo '## Images @ testbed-node-1' 2025-07-12 20:47:07.478724 | orchestrator | + echo 2025-07-12 20:47:07.478735 | orchestrator | + osism container testbed-node-1 images 2025-07-12 20:47:09.727859 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-07-12 20:47:09.727994 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 24 hours ago 628MB 2025-07-12 20:47:09.728019 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 24 hours ago 329MB 2025-07-12 20:47:09.728037 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 24 hours ago 326MB 2025-07-12 20:47:09.728053 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 24 hours ago 1.59GB 2025-07-12 20:47:09.728070 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 24 hours ago 1.55GB 2025-07-12 20:47:09.728086 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 24 hours ago 417MB 2025-07-12 20:47:09.728101 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 24 hours ago 318MB 2025-07-12 20:47:09.728119 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 24 hours ago 375MB 2025-07-12 20:47:09.728136 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 24 hours ago 746MB 2025-07-12 20:47:09.728152 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 24 hours ago 1.01GB 
2025-07-12 20:47:09.728168 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 24 hours ago 318MB 2025-07-12 20:47:09.728303 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 24 hours ago 361MB 2025-07-12 20:47:09.728328 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 24 hours ago 361MB 2025-07-12 20:47:09.728344 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 24 hours ago 1.21GB 2025-07-12 20:47:09.728383 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 24 hours ago 353MB 2025-07-12 20:47:09.728401 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 24 hours ago 410MB 2025-07-12 20:47:09.728674 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 24 hours ago 344MB 2025-07-12 20:47:09.728705 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 24 hours ago 358MB 2025-07-12 20:47:09.728720 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 24 hours ago 324MB 2025-07-12 20:47:09.728732 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 24 hours ago 351MB 2025-07-12 20:47:09.728744 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 24 hours ago 324MB 2025-07-12 20:47:09.728756 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 24 hours ago 590MB 2025-07-12 20:47:09.728767 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 24 hours ago 946MB 2025-07-12 20:47:09.728780 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 
65e36d1176bd 24 hours ago 947MB 2025-07-12 20:47:09.728797 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 24 hours ago 947MB 2025-07-12 20:47:09.728826 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 24 hours ago 946MB 2025-07-12 20:47:09.728845 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 24 hours ago 1.1GB 2025-07-12 20:47:09.728864 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 24 hours ago 1.1GB 2025-07-12 20:47:09.728884 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 24 hours ago 1.12GB 2025-07-12 20:47:09.728902 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 24 hours ago 1.1GB 2025-07-12 20:47:09.728920 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 24 hours ago 1.12GB 2025-07-12 20:47:09.728938 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 24 hours ago 1.15GB 2025-07-12 20:47:09.728953 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 24 hours ago 1.04GB 2025-07-12 20:47:09.728971 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 24 hours ago 1.06GB 2025-07-12 20:47:09.728986 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 24 hours ago 1.06GB 2025-07-12 20:47:09.729003 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 24 hours ago 1.06GB 2025-07-12 20:47:09.729020 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 24 hours ago 1.41GB 2025-07-12 20:47:09.729057 | orchestrator | registry.osism.tech/kolla/release/cinder-api 
25.2.1.20250711 c0d28e8febb9 24 hours ago 1.41GB 2025-07-12 20:47:09.729074 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 24 hours ago 1.29GB 2025-07-12 20:47:09.729093 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 24 hours ago 1.42GB 2025-07-12 20:47:09.729112 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 24 hours ago 1.29GB 2025-07-12 20:47:09.729129 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 24 hours ago 1.29GB 2025-07-12 20:47:09.729148 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 24 hours ago 1.2GB 2025-07-12 20:47:09.729165 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 24 hours ago 1.31GB 2025-07-12 20:47:09.729182 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 24 hours ago 1.05GB 2025-07-12 20:47:09.729200 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 24 hours ago 1.05GB 2025-07-12 20:47:09.729217 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 24 hours ago 1.05GB 2025-07-12 20:47:09.729236 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 24 hours ago 1.06GB 2025-07-12 20:47:09.729253 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 24 hours ago 1.06GB 2025-07-12 20:47:09.729322 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 24 hours ago 1.05GB 2025-07-12 20:47:09.729357 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 24 hours ago 1.11GB 2025-07-12 20:47:09.729369 | orchestrator | registry.osism.tech/kolla/release/keystone 
26.0.1.20250711 caf4f12b4799 24 hours ago 1.13GB 2025-07-12 20:47:09.729380 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 24 hours ago 1.11GB 2025-07-12 20:47:09.729391 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 24 hours ago 1.24GB 2025-07-12 20:47:09.729403 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 2 months ago 1.27GB 2025-07-12 20:47:10.026574 | orchestrator | + for node in testbed-manager testbed-node-0 testbed-node-1 testbed-node-2 2025-07-12 20:47:10.026676 | orchestrator | ++ semver 9.2.0 5.0.0 2025-07-12 20:47:10.073788 | orchestrator | 2025-07-12 20:47:10.073886 | orchestrator | ## Containers @ testbed-node-2 2025-07-12 20:47:10.073900 | orchestrator | 2025-07-12 20:47:10.073912 | orchestrator | + [[ 1 -eq -1 ]] 2025-07-12 20:47:10.073923 | orchestrator | + echo 2025-07-12 20:47:10.073935 | orchestrator | + echo '## Containers @ testbed-node-2' 2025-07-12 20:47:10.073947 | orchestrator | + echo 2025-07-12 20:47:10.073958 | orchestrator | + osism container testbed-node-2 ps 2025-07-12 20:47:12.452068 | orchestrator | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 2025-07-12 20:47:12.452231 | orchestrator | bd9dc2099a8c registry.osism.tech/kolla/release/octavia-worker:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_worker 2025-07-12 20:47:12.452251 | orchestrator | a0471e374097 registry.osism.tech/kolla/release/octavia-housekeeping:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_housekeeping 2025-07-12 20:47:12.452263 | orchestrator | 025f1fa194d1 registry.osism.tech/kolla/release/octavia-health-manager:15.0.1.20250711 "dumb-init --single-…" 4 minutes ago Up 4 minutes (healthy) octavia_health_manager 2025-07-12 20:47:12.452392 | orchestrator | 61c42ee35a6c registry.osism.tech/kolla/release/octavia-driver-agent:15.0.1.20250711 "dumb-init --single-…" 5 
minutes ago Up 5 minutes octavia_driver_agent 2025-07-12 20:47:12.452407 | orchestrator | 6e7b2714b7a7 registry.osism.tech/kolla/release/octavia-api:15.0.1.20250711 "dumb-init --single-…" 5 minutes ago Up 5 minutes (healthy) octavia_api 2025-07-12 20:47:12.452418 | orchestrator | 5a47ff2c45f0 registry.osism.tech/kolla/release/grafana:12.0.2.20250711 "dumb-init --single-…" 7 minutes ago Up 7 minutes grafana 2025-07-12 20:47:12.452429 | orchestrator | aa7e8e3125fc registry.osism.tech/kolla/release/magnum-conductor:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_conductor 2025-07-12 20:47:12.452440 | orchestrator | 3769de7b8132 registry.osism.tech/kolla/release/magnum-api:19.0.1.20250711 "dumb-init --single-…" 8 minutes ago Up 8 minutes (healthy) magnum_api 2025-07-12 20:47:12.452451 | orchestrator | 15a87ec04eba registry.osism.tech/kolla/release/placement-api:12.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 9 minutes (healthy) placement_api 2025-07-12 20:47:12.452462 | orchestrator | ed1c5a97374f registry.osism.tech/kolla/release/neutron-server:25.2.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) neutron_server 2025-07-12 20:47:12.452473 | orchestrator | 3fd5ca136827 registry.osism.tech/kolla/release/designate-worker:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_worker 2025-07-12 20:47:12.452484 | orchestrator | 30e5acdb19a0 registry.osism.tech/kolla/release/nova-novncproxy:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_novncproxy 2025-07-12 20:47:12.452495 | orchestrator | 8255d861abab registry.osism.tech/kolla/release/designate-mdns:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_mdns 2025-07-12 20:47:12.452506 | orchestrator | 8d5f1d3e03e2 registry.osism.tech/kolla/release/nova-conductor:30.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) nova_conductor 
2025-07-12 20:47:12.452517 | orchestrator | c5bae609e3ba registry.osism.tech/kolla/release/designate-producer:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_producer 2025-07-12 20:47:12.452527 | orchestrator | c0a0277ef001 registry.osism.tech/kolla/release/designate-central:19.0.1.20250711 "dumb-init --single-…" 10 minutes ago Up 10 minutes (healthy) designate_central 2025-07-12 20:47:12.452538 | orchestrator | d654572f41ee registry.osism.tech/kolla/release/designate-api:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_api 2025-07-12 20:47:12.452549 | orchestrator | d9a73aae5175 registry.osism.tech/kolla/release/designate-backend-bind9:19.0.1.20250711 "dumb-init --single-…" 11 minutes ago Up 11 minutes (healthy) designate_backend_bind9 2025-07-12 20:47:12.452560 | orchestrator | c0a0d82d0276 registry.osism.tech/kolla/release/barbican-worker:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_worker 2025-07-12 20:47:12.452588 | orchestrator | 74d04be4aa50 registry.osism.tech/kolla/release/barbican-keystone-listener:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_keystone_listener 2025-07-12 20:47:12.452602 | orchestrator | 780cfbfd1746 registry.osism.tech/kolla/release/barbican-api:19.0.1.20250711 "dumb-init --single-…" 12 minutes ago Up 12 minutes (healthy) barbican_api 2025-07-12 20:47:12.452621 | orchestrator | e16fb2df88a7 registry.osism.tech/kolla/release/nova-api:30.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 13 minutes (healthy) nova_api 2025-07-12 20:47:12.452633 | orchestrator | a7e284c0a25e registry.osism.tech/kolla/release/nova-scheduler:30.0.1.20250711 "dumb-init --single-…" 13 minutes ago Up 9 minutes (healthy) nova_scheduler 2025-07-12 20:47:12.452647 | orchestrator | 92a8cff1e65b registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.8.0.20250711 "dumb-init --single-…" 14 
minutes ago Up 14 minutes prometheus_elasticsearch_exporter 2025-07-12 20:47:12.452660 | orchestrator | 98d87a07d08c registry.osism.tech/kolla/release/glance-api:29.0.1.20250711 "dumb-init --single-…" 14 minutes ago Up 14 minutes (healthy) glance_api 2025-07-12 20:47:12.452672 | orchestrator | 6c2573904af8 registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_cadvisor 2025-07-12 20:47:12.452685 | orchestrator | 686cd6548b4a registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.15.0.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_memcached_exporter 2025-07-12 20:47:12.452697 | orchestrator | 1b13681b5c2b registry.osism.tech/kolla/release/cinder-scheduler:25.2.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_scheduler 2025-07-12 20:47:12.452708 | orchestrator | 3027c9d5f14f registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.16.0.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_mysqld_exporter 2025-07-12 20:47:12.452719 | orchestrator | ed18242bfb2e registry.osism.tech/kolla/release/cinder-api:25.2.1.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes (healthy) cinder_api 2025-07-12 20:47:12.452730 | orchestrator | f4008975030c registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711 "dumb-init --single-…" 15 minutes ago Up 15 minutes prometheus_node_exporter 2025-07-12 20:47:12.452741 | orchestrator | a48f2ca1c58a registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mgr -…" 17 minutes ago Up 17 minutes ceph-mgr-testbed-node-2 2025-07-12 20:47:12.452759 | orchestrator | 667299cf13cb registry.osism.tech/kolla/release/keystone:26.0.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone 2025-07-12 20:47:12.452770 | orchestrator | 0b8ff71ed85c registry.osism.tech/kolla/release/keystone-fernet:26.0.1.20250711 "dumb-init --single-…" 19 
minutes ago Up 19 minutes (healthy) keystone_fernet 2025-07-12 20:47:12.452781 | orchestrator | a8787daf4ddb registry.osism.tech/kolla/release/horizon:25.1.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) horizon 2025-07-12 20:47:12.452792 | orchestrator | d177569386f5 registry.osism.tech/kolla/release/keystone-ssh:26.0.1.20250711 "dumb-init --single-…" 19 minutes ago Up 19 minutes (healthy) keystone_ssh 2025-07-12 20:47:12.452803 | orchestrator | 1155dc9ce554 registry.osism.tech/kolla/release/opensearch-dashboards:2.19.2.20250711 "dumb-init --single-…" 21 minutes ago Up 21 minutes (healthy) opensearch_dashboards 2025-07-12 20:47:12.452814 | orchestrator | 8b3432405e4f registry.osism.tech/kolla/release/mariadb-server:10.11.13.20250711 "dumb-init -- kolla_…" 21 minutes ago Up 21 minutes (healthy) mariadb 2025-07-12 20:47:12.452825 | orchestrator | f639a84622f3 registry.osism.tech/kolla/release/opensearch:2.19.2.20250711 "dumb-init --single-…" 22 minutes ago Up 22 minutes (healthy) opensearch 2025-07-12 20:47:12.452843 | orchestrator | eae2076683b8 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-crash" 24 minutes ago Up 24 minutes ceph-crash-testbed-node-2 2025-07-12 20:47:12.452861 | orchestrator | 724ee23cd2b0 registry.osism.tech/kolla/release/keepalived:2.2.7.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes keepalived 2025-07-12 20:47:12.452877 | orchestrator | e7f2325df640 registry.osism.tech/kolla/release/proxysql:2.7.3.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) proxysql 2025-07-12 20:47:12.452888 | orchestrator | bf29b34821d9 registry.osism.tech/kolla/release/haproxy:2.6.12.20250711 "dumb-init --single-…" 24 minutes ago Up 24 minutes (healthy) haproxy 2025-07-12 20:47:12.452899 | orchestrator | 564ed9b93aa7 registry.osism.tech/kolla/release/ovn-northd:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_northd 2025-07-12 20:47:12.452910 | orchestrator | 5253749641b8 
registry.osism.tech/kolla/release/ovn-sb-db-server:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_sb_db 2025-07-12 20:47:12.452921 | orchestrator | d636b6029ab2 registry.osism.tech/kolla/release/ovn-nb-db-server:24.9.2.20250711 "dumb-init --single-…" 27 minutes ago Up 27 minutes ovn_nb_db 2025-07-12 20:47:12.452932 | orchestrator | 35f810d040fa registry.osism.tech/kolla/release/rabbitmq:3.13.7.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes (healthy) rabbitmq 2025-07-12 20:47:12.452943 | orchestrator | b25cfc8c2bc7 registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711 "dumb-init --single-…" 28 minutes ago Up 28 minutes ovn_controller 2025-07-12 20:47:12.452953 | orchestrator | 47937d7fb0e8 registry.osism.tech/osism/ceph-daemon:18.2.7 "/usr/bin/ceph-mon -…" 28 minutes ago Up 28 minutes ceph-mon-testbed-node-2 2025-07-12 20:47:12.452964 | orchestrator | 60fd6cb49c1b registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711 "dumb-init --single-…" 30 minutes ago Up 29 minutes (healthy) openvswitch_vswitchd 2025-07-12 20:47:12.453058 | orchestrator | 98c0cdc2fb99 registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) openvswitch_db 2025-07-12 20:47:12.453072 | orchestrator | 9046afb6450a registry.osism.tech/kolla/release/redis-sentinel:7.0.15.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis_sentinel 2025-07-12 20:47:12.453083 | orchestrator | e908e8cc98fc registry.osism.tech/kolla/release/redis:7.0.15.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) redis 2025-07-12 20:47:12.453094 | orchestrator | e773a6c987be registry.osism.tech/kolla/release/memcached:1.6.18.20250711 "dumb-init --single-…" 30 minutes ago Up 30 minutes (healthy) memcached 2025-07-12 20:47:12.453105 | orchestrator | df51b1a6bd72 registry.osism.tech/kolla/release/cron:3.0.20250711 "dumb-init --single-…" 31 minutes ago 
Up 31 minutes cron 2025-07-12 20:47:12.453116 | orchestrator | 1798778a573b registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711 "dumb-init --single-…" 31 minutes ago Up 31 minutes kolla_toolbox 2025-07-12 20:47:12.453127 | orchestrator | 30e12c8090d3 registry.osism.tech/kolla/release/fluentd:5.0.7.20250711 "dumb-init --single-…" 32 minutes ago Up 32 minutes fluentd 2025-07-12 20:47:12.745928 | orchestrator | 2025-07-12 20:47:12.746089 | orchestrator | ## Images @ testbed-node-2 2025-07-12 20:47:12.746107 | orchestrator | 2025-07-12 20:47:12.746120 | orchestrator | + echo 2025-07-12 20:47:12.746132 | orchestrator | + echo '## Images @ testbed-node-2' 2025-07-12 20:47:12.746145 | orchestrator | + echo 2025-07-12 20:47:12.746156 | orchestrator | + osism container testbed-node-2 images 2025-07-12 20:47:14.969980 | orchestrator | REPOSITORY TAG IMAGE ID CREATED SIZE 2025-07-12 20:47:14.971061 | orchestrator | registry.osism.tech/kolla/release/fluentd 5.0.7.20250711 eaa70c1312aa 24 hours ago 628MB 2025-07-12 20:47:14.971142 | orchestrator | registry.osism.tech/kolla/release/keepalived 2.2.7.20250711 c7f6abdb2516 24 hours ago 329MB 2025-07-12 20:47:14.971158 | orchestrator | registry.osism.tech/kolla/release/haproxy 2.6.12.20250711 0a9fd950fe86 24 hours ago 326MB 2025-07-12 20:47:14.971170 | orchestrator | registry.osism.tech/kolla/release/opensearch 2.19.2.20250711 d8c44fac73c2 24 hours ago 1.59GB 2025-07-12 20:47:14.971181 | orchestrator | registry.osism.tech/kolla/release/opensearch-dashboards 2.19.2.20250711 db87020f3b90 24 hours ago 1.55GB 2025-07-12 20:47:14.971192 | orchestrator | registry.osism.tech/kolla/release/proxysql 2.7.3.20250711 4c6eaa052643 24 hours ago 417MB 2025-07-12 20:47:14.971203 | orchestrator | registry.osism.tech/kolla/release/memcached 1.6.18.20250711 cd87896ace76 24 hours ago 318MB 2025-07-12 20:47:14.971214 | orchestrator | registry.osism.tech/kolla/release/kolla-toolbox 19.5.1.20250711 ad526ea47263 24 hours ago 746MB 2025-07-12 
20:47:14.971225 | orchestrator | registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711 4ce47f209c9b 24 hours ago 375MB 2025-07-12 20:47:14.971236 | orchestrator | registry.osism.tech/kolla/release/grafana 12.0.2.20250711 f4164dfd1b02 24 hours ago 1.01GB 2025-07-12 20:47:14.971247 | orchestrator | registry.osism.tech/kolla/release/cron 3.0.20250711 de0bd651bf89 24 hours ago 318MB 2025-07-12 20:47:14.971258 | orchestrator | registry.osism.tech/kolla/release/openvswitch-db-server 3.4.2.20250711 15f29551e6ce 24 hours ago 361MB 2025-07-12 20:47:14.971268 | orchestrator | registry.osism.tech/kolla/release/openvswitch-vswitchd 3.4.2.20250711 ea9ea8f197d8 24 hours ago 361MB 2025-07-12 20:47:14.971318 | orchestrator | registry.osism.tech/kolla/release/horizon 25.1.1.20250711 d4ae4a297d3b 24 hours ago 1.21GB 2025-07-12 20:47:14.971332 | orchestrator | registry.osism.tech/kolla/release/prometheus-mysqld-exporter 0.16.0.20250711 142dafde994c 24 hours ago 353MB 2025-07-12 20:47:14.971343 | orchestrator | registry.osism.tech/kolla/release/prometheus-cadvisor 0.49.2.20250711 937f4652a0d1 24 hours ago 410MB 2025-07-12 20:47:14.971353 | orchestrator | registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter 1.8.0.20250711 62e13ec7689a 24 hours ago 344MB 2025-07-12 20:47:14.971389 | orchestrator | registry.osism.tech/kolla/release/prometheus-node-exporter 1.8.2.20250711 361ce2873c65 24 hours ago 358MB 2025-07-12 20:47:14.971401 | orchestrator | registry.osism.tech/kolla/release/redis-sentinel 7.0.15.20250711 534f393a19e2 24 hours ago 324MB 2025-07-12 20:47:14.971427 | orchestrator | registry.osism.tech/kolla/release/prometheus-memcached-exporter 0.15.0.20250711 834c4c2dcd78 24 hours ago 351MB 2025-07-12 20:47:14.971438 | orchestrator | registry.osism.tech/kolla/release/redis 7.0.15.20250711 d7d5c3586026 24 hours ago 324MB 2025-07-12 20:47:14.971449 | orchestrator | registry.osism.tech/kolla/release/mariadb-server 10.11.13.20250711 5892b19e1064 24 hours ago 590MB 
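All kolla release images in the listing above carry the same date-stamped tag suffix (`.20250711`), which is how this deployment pins every service to one image snapshot. That consistency can be checked mechanically rather than by eye; a minimal sketch, with a few rows from the listing inlined (the temp-file path and row subset are illustrative, not part of the job):

```shell
# Sketch: verify that every image tag ends in the same date-stamp suffix.
# A handful of repository/tag pairs from the listing above, inlined:
cat > /tmp/images.txt <<'EOF'
registry.osism.tech/kolla/release/fluentd 5.0.7.20250711
registry.osism.tech/kolla/release/rabbitmq 3.13.7.20250711
registry.osism.tech/kolla/release/keystone 26.0.1.20250711
EOF
# Take the last dot-separated component of each tag and count distinct values.
suffixes=$(awk '{ n = split($2, a, "."); print a[n] }' /tmp/images.txt | sort -u)
if [ "$(echo "$suffixes" | wc -l)" -eq 1 ]; then
  echo "all images pinned to snapshot $suffixes"
else
  echo "mixed snapshots found:" "$suffixes" >&2
fi
```

On a live node the input would come from the image listing itself (e.g. the `osism container <node> images` output above) rather than a here-doc.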
2025-07-12 20:47:14.971460 | orchestrator | registry.osism.tech/kolla/release/ovn-controller 24.9.2.20250711 65e36d1176bd 24 hours ago 947MB 2025-07-12 20:47:14.971471 | orchestrator | registry.osism.tech/kolla/release/ovn-sb-db-server 24.9.2.20250711 28654474dfe5 24 hours ago 946MB 2025-07-12 20:47:14.971505 | orchestrator | registry.osism.tech/kolla/release/ovn-northd 24.9.2.20250711 58ad45688234 24 hours ago 947MB 2025-07-12 20:47:14.971516 | orchestrator | registry.osism.tech/kolla/release/ovn-nb-db-server 24.9.2.20250711 affa47a97549 24 hours ago 946MB 2025-07-12 20:47:14.971527 | orchestrator | registry.osism.tech/kolla/release/octavia-health-manager 15.0.1.20250711 06deffb77b4f 24 hours ago 1.1GB 2025-07-12 20:47:14.971538 | orchestrator | registry.osism.tech/kolla/release/octavia-worker 15.0.1.20250711 02867223fb33 24 hours ago 1.1GB 2025-07-12 20:47:14.971549 | orchestrator | registry.osism.tech/kolla/release/octavia-api 15.0.1.20250711 6146c08f2b76 24 hours ago 1.12GB 2025-07-12 20:47:14.971559 | orchestrator | registry.osism.tech/kolla/release/octavia-housekeeping 15.0.1.20250711 6d529ee19c1c 24 hours ago 1.1GB 2025-07-12 20:47:14.971570 | orchestrator | registry.osism.tech/kolla/release/octavia-driver-agent 15.0.1.20250711 b1ed239b634f 24 hours ago 1.12GB 2025-07-12 20:47:14.971581 | orchestrator | registry.osism.tech/kolla/release/glance-api 29.0.1.20250711 65a4d0afbb1c 24 hours ago 1.15GB 2025-07-12 20:47:14.971609 | orchestrator | registry.osism.tech/kolla/release/placement-api 12.0.1.20250711 2b6bd346ad18 24 hours ago 1.04GB 2025-07-12 20:47:14.971620 | orchestrator | registry.osism.tech/kolla/release/barbican-api 19.0.1.20250711 1b7dd2682590 24 hours ago 1.06GB 2025-07-12 20:47:14.971631 | orchestrator | registry.osism.tech/kolla/release/barbican-worker 19.0.1.20250711 e475391ce44d 24 hours ago 1.06GB 2025-07-12 20:47:14.971642 | orchestrator | registry.osism.tech/kolla/release/barbican-keystone-listener 19.0.1.20250711 09290580fa03 24 hours ago 
1.06GB 2025-07-12 20:47:14.971653 | orchestrator | registry.osism.tech/kolla/release/cinder-scheduler 25.2.1.20250711 a09a8be1b711 24 hours ago 1.41GB 2025-07-12 20:47:14.971663 | orchestrator | registry.osism.tech/kolla/release/cinder-api 25.2.1.20250711 c0d28e8febb9 24 hours ago 1.41GB 2025-07-12 20:47:14.971674 | orchestrator | registry.osism.tech/kolla/release/nova-scheduler 30.0.1.20250711 e0ad0ae52bef 24 hours ago 1.29GB 2025-07-12 20:47:14.971691 | orchestrator | registry.osism.tech/kolla/release/nova-novncproxy 30.0.1.20250711 b395cfe7f13f 24 hours ago 1.42GB 2025-07-12 20:47:14.971702 | orchestrator | registry.osism.tech/kolla/release/nova-api 30.0.1.20250711 ee83c124eb76 24 hours ago 1.29GB 2025-07-12 20:47:14.971713 | orchestrator | registry.osism.tech/kolla/release/nova-conductor 30.0.1.20250711 44e25b162470 24 hours ago 1.29GB 2025-07-12 20:47:14.971723 | orchestrator | registry.osism.tech/kolla/release/magnum-api 19.0.1.20250711 71f47d2b2def 24 hours ago 1.2GB 2025-07-12 20:47:14.971734 | orchestrator | registry.osism.tech/kolla/release/magnum-conductor 19.0.1.20250711 13b61cb4a5d2 24 hours ago 1.31GB 2025-07-12 20:47:14.971745 | orchestrator | registry.osism.tech/kolla/release/designate-mdns 19.0.1.20250711 a030b794eaa9 24 hours ago 1.05GB 2025-07-12 20:47:14.971756 | orchestrator | registry.osism.tech/kolla/release/designate-producer 19.0.1.20250711 2d0954c30848 24 hours ago 1.05GB 2025-07-12 20:47:14.971766 | orchestrator | registry.osism.tech/kolla/release/designate-central 19.0.1.20250711 f7fa0bcabe47 24 hours ago 1.05GB 2025-07-12 20:47:14.971778 | orchestrator | registry.osism.tech/kolla/release/designate-worker 19.0.1.20250711 4de726ebba0e 24 hours ago 1.06GB 2025-07-12 20:47:14.971789 | orchestrator | registry.osism.tech/kolla/release/designate-backend-bind9 19.0.1.20250711 a14c6ace0b24 24 hours ago 1.06GB 2025-07-12 20:47:14.971799 | orchestrator | registry.osism.tech/kolla/release/designate-api 19.0.1.20250711 2a2b32cdb83f 24 hours ago 
1.05GB 2025-07-12 20:47:14.971817 | orchestrator | registry.osism.tech/kolla/release/keystone-fernet 26.0.1.20250711 53889b0cb73d 24 hours ago 1.11GB 2025-07-12 20:47:14.971828 | orchestrator | registry.osism.tech/kolla/release/keystone 26.0.1.20250711 caf4f12b4799 24 hours ago 1.13GB 2025-07-12 20:47:14.971839 | orchestrator | registry.osism.tech/kolla/release/keystone-ssh 26.0.1.20250711 3ba6da1abaea 24 hours ago 1.11GB 2025-07-12 20:47:14.971850 | orchestrator | registry.osism.tech/kolla/release/neutron-server 25.2.1.20250711 8377b7d24f73 24 hours ago 1.24GB 2025-07-12 20:47:14.971861 | orchestrator | registry.osism.tech/osism/ceph-daemon 18.2.7 5f92363b1f93 2 months ago 1.27GB 2025-07-12 20:47:15.281926 | orchestrator | + sh -c /opt/configuration/scripts/check-services.sh 2025-07-12 20:47:15.287487 | orchestrator | + set -e 2025-07-12 20:47:15.287577 | orchestrator | + source /opt/manager-vars.sh 2025-07-12 20:47:15.289245 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-07-12 20:47:15.289387 | orchestrator | ++ NUMBER_OF_NODES=6 2025-07-12 20:47:15.289411 | orchestrator | ++ export CEPH_VERSION=reef 2025-07-12 20:47:15.289428 | orchestrator | ++ CEPH_VERSION=reef 2025-07-12 20:47:15.289441 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-07-12 20:47:15.289453 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-07-12 20:47:15.289464 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-07-12 20:47:15.289475 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-07-12 20:47:15.289486 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-07-12 20:47:15.289497 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-07-12 20:47:15.289508 | orchestrator | ++ export ARA=false 2025-07-12 20:47:15.289519 | orchestrator | ++ ARA=false 2025-07-12 20:47:15.289544 | orchestrator | ++ export DEPLOY_MODE=manager 2025-07-12 20:47:15.289556 | orchestrator | ++ DEPLOY_MODE=manager 2025-07-12 20:47:15.289567 | orchestrator | ++ export TEMPEST=false 2025-07-12 20:47:15.289578 | 
orchestrator | ++ TEMPEST=false 2025-07-12 20:47:15.289588 | orchestrator | ++ export IS_ZUUL=true 2025-07-12 20:47:15.289599 | orchestrator | ++ IS_ZUUL=true 2025-07-12 20:47:15.289610 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109 2025-07-12 20:47:15.289626 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109 2025-07-12 20:47:15.289637 | orchestrator | ++ export EXTERNAL_API=false 2025-07-12 20:47:15.289648 | orchestrator | ++ EXTERNAL_API=false 2025-07-12 20:47:15.289659 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-07-12 20:47:15.289670 | orchestrator | ++ IMAGE_USER=ubuntu 2025-07-12 20:47:15.289681 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-07-12 20:47:15.289692 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-07-12 20:47:15.289703 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-07-12 20:47:15.289714 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-07-12 20:47:15.289725 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-07-12 20:47:15.289736 | orchestrator | + sh -c /opt/configuration/scripts/check/100-ceph-with-ansible.sh 2025-07-12 20:47:15.299368 | orchestrator | + set -e 2025-07-12 20:47:15.299475 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-12 20:47:15.299497 | orchestrator | ++ export INTERACTIVE=false 2025-07-12 20:47:15.299514 | orchestrator | ++ INTERACTIVE=false 2025-07-12 20:47:15.299531 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-12 20:47:15.299548 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-12 20:47:15.299566 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-07-12 20:47:15.300870 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-07-12 20:47:15.305986 | orchestrator | 2025-07-12 20:47:15.306073 | orchestrator | # Ceph status 2025-07-12 20:47:15.306083 | orchestrator | 2025-07-12 20:47:15.306097 | orchestrator | ++ export 
MANAGER_VERSION=9.2.0 2025-07-12 20:47:15.306112 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-07-12 20:47:15.306126 | orchestrator | + echo 2025-07-12 20:47:15.306136 | orchestrator | + echo '# Ceph status' 2025-07-12 20:47:15.306144 | orchestrator | + echo 2025-07-12 20:47:15.306152 | orchestrator | + ceph -s 2025-07-12 20:47:15.921023 | orchestrator | cluster: 2025-07-12 20:47:15.921131 | orchestrator | id: 11111111-1111-1111-1111-111111111111 2025-07-12 20:47:15.921146 | orchestrator | health: HEALTH_OK 2025-07-12 20:47:15.921158 | orchestrator | 2025-07-12 20:47:15.921170 | orchestrator | services: 2025-07-12 20:47:15.921181 | orchestrator | mon: 3 daemons, quorum testbed-node-0,testbed-node-1,testbed-node-2 (age 28m) 2025-07-12 20:47:15.921193 | orchestrator | mgr: testbed-node-2(active, since 16m), standbys: testbed-node-1, testbed-node-0 2025-07-12 20:47:15.921231 | orchestrator | mds: 1/1 daemons up, 2 standby 2025-07-12 20:47:15.921242 | orchestrator | osd: 6 osds: 6 up (since 25m), 6 in (since 25m) 2025-07-12 20:47:15.921253 | orchestrator | rgw: 3 daemons active (3 hosts, 1 zones) 2025-07-12 20:47:15.921265 | orchestrator | 2025-07-12 20:47:15.921276 | orchestrator | data: 2025-07-12 20:47:15.921331 | orchestrator | volumes: 1/1 healthy 2025-07-12 20:47:15.921342 | orchestrator | pools: 14 pools, 401 pgs 2025-07-12 20:47:15.921354 | orchestrator | objects: 524 objects, 2.2 GiB 2025-07-12 20:47:15.921365 | orchestrator | usage: 7.1 GiB used, 113 GiB / 120 GiB avail 2025-07-12 20:47:15.921376 | orchestrator | pgs: 401 active+clean 2025-07-12 20:47:15.921387 | orchestrator | 2025-07-12 20:47:15.968107 | orchestrator | 2025-07-12 20:47:15.968199 | orchestrator | # Ceph versions 2025-07-12 20:47:15.968213 | orchestrator | 2025-07-12 20:47:15.968225 | orchestrator | + echo 2025-07-12 20:47:15.968236 | orchestrator | + echo '# Ceph versions' 2025-07-12 20:47:15.968248 | orchestrator | + echo 2025-07-12 20:47:15.968259 | orchestrator | + ceph versions 
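The `ceph -s` block above reports `HEALTH_OK` with all mons, mgrs, OSDs, and the active MDS up. A check script typically gates on that health string before proceeding; a minimal sketch, assuming `ceph health` is available on the node (the hard-coded value below stands in for the live command):

```shell
# Minimal health gate sketch: require HEALTH_OK before continuing.
# On a live node, status would come from: status=$(ceph health)
status="HEALTH_OK"
if [ "$status" = "HEALTH_OK" ]; then
  echo "ceph healthy"
else
  echo "ceph not healthy: $status" >&2
  exit 1
fi
```

Strict equality matches a deploy-gate use case; a pipeline that must tolerate transient states (e.g. rebalancing) might accept `HEALTH_WARN` as well.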
2025-07-12 20:47:16.603246 | orchestrator | { 2025-07-12 20:47:16.603388 | orchestrator | "mon": { 2025-07-12 20:47:16.603405 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-12 20:47:16.603418 | orchestrator | }, 2025-07-12 20:47:16.603429 | orchestrator | "mgr": { 2025-07-12 20:47:16.603440 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-12 20:47:16.603451 | orchestrator | }, 2025-07-12 20:47:16.603462 | orchestrator | "osd": { 2025-07-12 20:47:16.603473 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 6 2025-07-12 20:47:16.603484 | orchestrator | }, 2025-07-12 20:47:16.603494 | orchestrator | "mds": { 2025-07-12 20:47:16.603505 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-12 20:47:16.603516 | orchestrator | }, 2025-07-12 20:47:16.603534 | orchestrator | "rgw": { 2025-07-12 20:47:16.603550 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 3 2025-07-12 20:47:16.603561 | orchestrator | }, 2025-07-12 20:47:16.603572 | orchestrator | "overall": { 2025-07-12 20:47:16.603583 | orchestrator | "ceph version 18.2.7 (6b0e988052ec84cf2d4a54ff9bbbc5e720b621ad) reef (stable)": 18 2025-07-12 20:47:16.603594 | orchestrator | } 2025-07-12 20:47:16.603605 | orchestrator | } 2025-07-12 20:47:16.651490 | orchestrator | 2025-07-12 20:47:16.651596 | orchestrator | # Ceph OSD tree 2025-07-12 20:47:16.651611 | orchestrator | 2025-07-12 20:47:16.651624 | orchestrator | + echo 2025-07-12 20:47:16.651636 | orchestrator | + echo '# Ceph OSD tree' 2025-07-12 20:47:16.651653 | orchestrator | + echo 2025-07-12 20:47:16.651672 | orchestrator | + ceph osd df tree 2025-07-12 20:47:17.186653 | orchestrator | ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME 2025-07-12 20:47:17.186815 | 
orchestrator | -1 0.11691 - 120 GiB 7.1 GiB 6.7 GiB 6 KiB 430 MiB 113 GiB 5.92 1.00 - root default 2025-07-12 20:47:17.186841 | orchestrator | -7 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-3 2025-07-12 20:47:17.186883 | orchestrator | 0 hdd 0.01949 1.00000 20 GiB 1.6 GiB 1.5 GiB 1 KiB 74 MiB 18 GiB 7.84 1.33 191 up osd.0 2025-07-12 20:47:17.186904 | orchestrator | 5 hdd 0.01949 1.00000 20 GiB 816 MiB 747 MiB 1 KiB 70 MiB 19 GiB 3.99 0.67 197 up osd.5 2025-07-12 20:47:17.186923 | orchestrator | -3 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-4 2025-07-12 20:47:17.186944 | orchestrator | 1 hdd 0.01949 1.00000 20 GiB 992 MiB 923 MiB 1 KiB 70 MiB 19 GiB 4.85 0.82 209 up osd.1 2025-07-12 20:47:17.186965 | orchestrator | 3 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.98 1.18 181 up osd.3 2025-07-12 20:47:17.186985 | orchestrator | -5 0.03897 - 40 GiB 2.4 GiB 2.2 GiB 2 KiB 143 MiB 38 GiB 5.92 1.00 - host testbed-node-5 2025-07-12 20:47:17.187004 | orchestrator | 2 hdd 0.01949 1.00000 20 GiB 1.4 GiB 1.3 GiB 1 KiB 74 MiB 19 GiB 6.87 1.16 203 up osd.2 2025-07-12 20:47:17.187052 | orchestrator | 4 hdd 0.01949 1.00000 20 GiB 1016 MiB 947 MiB 1 KiB 70 MiB 19 GiB 4.97 0.84 189 up osd.4 2025-07-12 20:47:17.187072 | orchestrator | TOTAL 120 GiB 7.1 GiB 6.7 GiB 9.3 KiB 430 MiB 113 GiB 5.92 2025-07-12 20:47:17.187093 | orchestrator | MIN/MAX VAR: 0.67/1.33 STDDEV: 1.38 2025-07-12 20:47:17.235484 | orchestrator | 2025-07-12 20:47:17.235602 | orchestrator | # Ceph monitor status 2025-07-12 20:47:17.235623 | orchestrator | 2025-07-12 20:47:17.235639 | orchestrator | + echo 2025-07-12 20:47:17.235655 | orchestrator | + echo '# Ceph monitor status' 2025-07-12 20:47:17.235671 | orchestrator | + echo 2025-07-12 20:47:17.235686 | orchestrator | + ceph mon stat 2025-07-12 20:47:17.871432 | orchestrator | e1: 3 mons at 
{testbed-node-0=[v2:192.168.16.10:3300/0,v1:192.168.16.10:6789/0],testbed-node-1=[v2:192.168.16.11:3300/0,v1:192.168.16.11:6789/0],testbed-node-2=[v2:192.168.16.12:3300/0,v1:192.168.16.12:6789/0]} removed_ranks: {} disallowed_leaders: {}, election epoch 6, leader 0 testbed-node-0, quorum 0,1,2 testbed-node-0,testbed-node-1,testbed-node-2 2025-07-12 20:47:17.923155 | orchestrator | 2025-07-12 20:47:17.923251 | orchestrator | # Ceph quorum status 2025-07-12 20:47:17.923266 | orchestrator | 2025-07-12 20:47:17.923278 | orchestrator | + echo 2025-07-12 20:47:17.923316 | orchestrator | + echo '# Ceph quorum status' 2025-07-12 20:47:17.923327 | orchestrator | + echo 2025-07-12 20:47:17.924076 | orchestrator | + ceph quorum_status 2025-07-12 20:47:17.924192 | orchestrator | + jq 2025-07-12 20:47:18.584962 | orchestrator | { 2025-07-12 20:47:18.585083 | orchestrator | "election_epoch": 6, 2025-07-12 20:47:18.585098 | orchestrator | "quorum": [ 2025-07-12 20:47:18.585152 | orchestrator | 0, 2025-07-12 20:47:18.585164 | orchestrator | 1, 2025-07-12 20:47:18.585174 | orchestrator | 2 2025-07-12 20:47:18.585183 | orchestrator | ], 2025-07-12 20:47:18.585193 | orchestrator | "quorum_names": [ 2025-07-12 20:47:18.585203 | orchestrator | "testbed-node-0", 2025-07-12 20:47:18.585212 | orchestrator | "testbed-node-1", 2025-07-12 20:47:18.585222 | orchestrator | "testbed-node-2" 2025-07-12 20:47:18.585232 | orchestrator | ], 2025-07-12 20:47:18.585242 | orchestrator | "quorum_leader_name": "testbed-node-0", 2025-07-12 20:47:18.585253 | orchestrator | "quorum_age": 1733, 2025-07-12 20:47:18.585262 | orchestrator | "features": { 2025-07-12 20:47:18.585272 | orchestrator | "quorum_con": "4540138322906710015", 2025-07-12 20:47:18.585351 | orchestrator | "quorum_mon": [ 2025-07-12 20:47:18.585364 | orchestrator | "kraken", 2025-07-12 20:47:18.585373 | orchestrator | "luminous", 2025-07-12 20:47:18.585383 | orchestrator | "mimic", 2025-07-12 20:47:18.585393 | orchestrator | 
"osdmap-prune", 2025-07-12 20:47:18.585402 | orchestrator | "nautilus", 2025-07-12 20:47:18.585412 | orchestrator | "octopus", 2025-07-12 20:47:18.585421 | orchestrator | "pacific", 2025-07-12 20:47:18.585431 | orchestrator | "elector-pinging", 2025-07-12 20:47:18.585440 | orchestrator | "quincy", 2025-07-12 20:47:18.585450 | orchestrator | "reef" 2025-07-12 20:47:18.585471 | orchestrator | ] 2025-07-12 20:47:18.585481 | orchestrator | }, 2025-07-12 20:47:18.585491 | orchestrator | "monmap": { 2025-07-12 20:47:18.585501 | orchestrator | "epoch": 1, 2025-07-12 20:47:18.585511 | orchestrator | "fsid": "11111111-1111-1111-1111-111111111111", 2025-07-12 20:47:18.585530 | orchestrator | "modified": "2025-07-12T20:18:03.132882Z", 2025-07-12 20:47:18.585540 | orchestrator | "created": "2025-07-12T20:18:03.132882Z", 2025-07-12 20:47:18.585550 | orchestrator | "min_mon_release": 18, 2025-07-12 20:47:18.585560 | orchestrator | "min_mon_release_name": "reef", 2025-07-12 20:47:18.585569 | orchestrator | "election_strategy": 1, 2025-07-12 20:47:18.585579 | orchestrator | "disallowed_leaders: ": "", 2025-07-12 20:47:18.585588 | orchestrator | "stretch_mode": false, 2025-07-12 20:47:18.585598 | orchestrator | "tiebreaker_mon": "", 2025-07-12 20:47:18.585607 | orchestrator | "removed_ranks: ": "", 2025-07-12 20:47:18.585617 | orchestrator | "features": { 2025-07-12 20:47:18.585626 | orchestrator | "persistent": [ 2025-07-12 20:47:18.585636 | orchestrator | "kraken", 2025-07-12 20:47:18.585645 | orchestrator | "luminous", 2025-07-12 20:47:18.585654 | orchestrator | "mimic", 2025-07-12 20:47:18.585664 | orchestrator | "osdmap-prune", 2025-07-12 20:47:18.585673 | orchestrator | "nautilus", 2025-07-12 20:47:18.585683 | orchestrator | "octopus", 2025-07-12 20:47:18.585692 | orchestrator | "pacific", 2025-07-12 20:47:18.585701 | orchestrator | "elector-pinging", 2025-07-12 20:47:18.585711 | orchestrator | "quincy", 2025-07-12 20:47:18.585720 | orchestrator | "reef" 2025-07-12 
20:47:18.585730 | orchestrator | ], 2025-07-12 20:47:18.585739 | orchestrator | "optional": [] 2025-07-12 20:47:18.585774 | orchestrator | }, 2025-07-12 20:47:18.585784 | orchestrator | "mons": [ 2025-07-12 20:47:18.585793 | orchestrator | { 2025-07-12 20:47:18.585803 | orchestrator | "rank": 0, 2025-07-12 20:47:18.585826 | orchestrator | "name": "testbed-node-0", 2025-07-12 20:47:18.585836 | orchestrator | "public_addrs": { 2025-07-12 20:47:18.585846 | orchestrator | "addrvec": [ 2025-07-12 20:47:18.585856 | orchestrator | { 2025-07-12 20:47:18.585865 | orchestrator | "type": "v2", 2025-07-12 20:47:18.585875 | orchestrator | "addr": "192.168.16.10:3300", 2025-07-12 20:47:18.585885 | orchestrator | "nonce": 0 2025-07-12 20:47:18.585894 | orchestrator | }, 2025-07-12 20:47:18.585904 | orchestrator | { 2025-07-12 20:47:18.585913 | orchestrator | "type": "v1", 2025-07-12 20:47:18.585923 | orchestrator | "addr": "192.168.16.10:6789", 2025-07-12 20:47:18.585932 | orchestrator | "nonce": 0 2025-07-12 20:47:18.585942 | orchestrator | } 2025-07-12 20:47:18.585951 | orchestrator | ] 2025-07-12 20:47:18.585961 | orchestrator | }, 2025-07-12 20:47:18.585970 | orchestrator | "addr": "192.168.16.10:6789/0", 2025-07-12 20:47:18.585980 | orchestrator | "public_addr": "192.168.16.10:6789/0", 2025-07-12 20:47:18.585989 | orchestrator | "priority": 0, 2025-07-12 20:47:18.585999 | orchestrator | "weight": 0, 2025-07-12 20:47:18.586008 | orchestrator | "crush_location": "{}" 2025-07-12 20:47:18.586069 | orchestrator | }, 2025-07-12 20:47:18.586081 | orchestrator | { 2025-07-12 20:47:18.586090 | orchestrator | "rank": 1, 2025-07-12 20:47:18.586100 | orchestrator | "name": "testbed-node-1", 2025-07-12 20:47:18.586110 | orchestrator | "public_addrs": { 2025-07-12 20:47:18.586119 | orchestrator | "addrvec": [ 2025-07-12 20:47:18.586130 | orchestrator | { 2025-07-12 20:47:18.586139 | orchestrator | "type": "v2", 2025-07-12 20:47:18.586149 | orchestrator | "addr": "192.168.16.11:3300", 
2025-07-12 20:47:18.586159 | orchestrator | "nonce": 0 2025-07-12 20:47:18.586169 | orchestrator | }, 2025-07-12 20:47:18.586178 | orchestrator | { 2025-07-12 20:47:18.586188 | orchestrator | "type": "v1", 2025-07-12 20:47:18.586197 | orchestrator | "addr": "192.168.16.11:6789", 2025-07-12 20:47:18.586207 | orchestrator | "nonce": 0 2025-07-12 20:47:18.586217 | orchestrator | } 2025-07-12 20:47:18.586226 | orchestrator | ] 2025-07-12 20:47:18.586236 | orchestrator | }, 2025-07-12 20:47:18.586246 | orchestrator | "addr": "192.168.16.11:6789/0", 2025-07-12 20:47:18.586255 | orchestrator | "public_addr": "192.168.16.11:6789/0", 2025-07-12 20:47:18.586265 | orchestrator | "priority": 0, 2025-07-12 20:47:18.586274 | orchestrator | "weight": 0, 2025-07-12 20:47:18.586313 | orchestrator | "crush_location": "{}" 2025-07-12 20:47:18.586323 | orchestrator | }, 2025-07-12 20:47:18.586333 | orchestrator | { 2025-07-12 20:47:18.586342 | orchestrator | "rank": 2, 2025-07-12 20:47:18.586352 | orchestrator | "name": "testbed-node-2", 2025-07-12 20:47:18.586361 | orchestrator | "public_addrs": { 2025-07-12 20:47:18.586371 | orchestrator | "addrvec": [ 2025-07-12 20:47:18.586380 | orchestrator | { 2025-07-12 20:47:18.586390 | orchestrator | "type": "v2", 2025-07-12 20:47:18.586399 | orchestrator | "addr": "192.168.16.12:3300", 2025-07-12 20:47:18.586409 | orchestrator | "nonce": 0 2025-07-12 20:47:18.586419 | orchestrator | }, 2025-07-12 20:47:18.586428 | orchestrator | { 2025-07-12 20:47:18.586438 | orchestrator | "type": "v1", 2025-07-12 20:47:18.586447 | orchestrator | "addr": "192.168.16.12:6789", 2025-07-12 20:47:18.586457 | orchestrator | "nonce": 0 2025-07-12 20:47:18.586466 | orchestrator | } 2025-07-12 20:47:18.586476 | orchestrator | ] 2025-07-12 20:47:18.586485 | orchestrator | }, 2025-07-12 20:47:18.586495 | orchestrator | "addr": "192.168.16.12:6789/0", 2025-07-12 20:47:18.586505 | orchestrator | "public_addr": "192.168.16.12:6789/0", 2025-07-12 20:47:18.586514 | 
orchestrator | "priority": 0, 2025-07-12 20:47:18.586524 | orchestrator | "weight": 0, 2025-07-12 20:47:18.586533 | orchestrator | "crush_location": "{}" 2025-07-12 20:47:18.586543 | orchestrator | } 2025-07-12 20:47:18.586552 | orchestrator | ] 2025-07-12 20:47:18.586562 | orchestrator | } 2025-07-12 20:47:18.586572 | orchestrator | } 2025-07-12 20:47:18.586593 | orchestrator | 2025-07-12 20:47:18.586604 | orchestrator | # Ceph free space status 2025-07-12 20:47:18.586614 | orchestrator | 2025-07-12 20:47:18.586623 | orchestrator | + echo 2025-07-12 20:47:18.586633 | orchestrator | + echo '# Ceph free space status' 2025-07-12 20:47:18.586651 | orchestrator | + echo 2025-07-12 20:47:18.586661 | orchestrator | + ceph df 2025-07-12 20:47:19.188873 | orchestrator | --- RAW STORAGE --- 2025-07-12 20:47:19.189016 | orchestrator | CLASS SIZE AVAIL USED RAW USED %RAW USED 2025-07-12 20:47:19.189045 | orchestrator | hdd 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-07-12 20:47:19.189057 | orchestrator | TOTAL 120 GiB 113 GiB 7.1 GiB 7.1 GiB 5.92 2025-07-12 20:47:19.189069 | orchestrator | 2025-07-12 20:47:19.189080 | orchestrator | --- POOLS --- 2025-07-12 20:47:19.189092 | orchestrator | POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL 2025-07-12 20:47:19.189104 | orchestrator | .mgr 1 1 577 KiB 2 1.1 MiB 0 52 GiB 2025-07-12 20:47:19.189115 | orchestrator | cephfs_data 2 32 0 B 0 0 B 0 35 GiB 2025-07-12 20:47:19.189127 | orchestrator | cephfs_metadata 3 16 4.4 KiB 22 96 KiB 0 35 GiB 2025-07-12 20:47:19.189138 | orchestrator | default.rgw.buckets.data 4 32 0 B 0 0 B 0 35 GiB 2025-07-12 20:47:19.189149 | orchestrator | default.rgw.buckets.index 5 32 0 B 0 0 B 0 35 GiB 2025-07-12 20:47:19.189160 | orchestrator | default.rgw.control 6 32 0 B 8 0 B 0 35 GiB 2025-07-12 20:47:19.189171 | orchestrator | default.rgw.log 7 32 3.6 KiB 177 408 KiB 0 35 GiB 2025-07-12 20:47:19.189182 | orchestrator | default.rgw.meta 8 32 0 B 0 0 B 0 35 GiB 2025-07-12 20:47:19.189193 | orchestrator | 
.rgw.root 9 32 3.9 KiB 8 64 KiB 0 52 GiB 2025-07-12 20:47:19.189203 | orchestrator | backups 10 32 19 B 2 12 KiB 0 35 GiB 2025-07-12 20:47:19.189214 | orchestrator | volumes 11 32 19 B 2 12 KiB 0 35 GiB 2025-07-12 20:47:19.189225 | orchestrator | images 12 32 2.2 GiB 299 6.7 GiB 5.99 35 GiB 2025-07-12 20:47:19.189236 | orchestrator | metrics 13 32 19 B 2 12 KiB 0 35 GiB 2025-07-12 20:47:19.189247 | orchestrator | vms 14 32 19 B 2 12 KiB 0 35 GiB 2025-07-12 20:47:19.233379 | orchestrator | ++ semver 9.2.0 5.0.0 2025-07-12 20:47:19.299881 | orchestrator | + [[ 1 -eq -1 ]] 2025-07-12 20:47:19.299982 | orchestrator | + [[ ! -e /etc/redhat-release ]] 2025-07-12 20:47:19.299997 | orchestrator | + osism apply facts 2025-07-12 20:47:31.320107 | orchestrator | 2025-07-12 20:47:31 | INFO  | Task 4760ec8e-e185-4210-89d1-1022ef80c393 (facts) was prepared for execution. 2025-07-12 20:47:31.320216 | orchestrator | 2025-07-12 20:47:31 | INFO  | It takes a moment until task 4760ec8e-e185-4210-89d1-1022ef80c393 (facts) has been started and output is visible here. 
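The trace above calls a `semver` helper (`semver 9.2.0 5.0.0`) and tests its result against `-1` to decide whether a legacy code path applies; the helper returned `1` (9.2.0 is newer than 5.0.0), so the branch was skipped. A hedged sketch of such a three-way comparison using GNU `sort -V` (`semver_cmp` is a hypothetical stand-in, not the actual helper used by the script):

```shell
# semver_cmp A B -> prints -1 if A < B, 0 if A = B, 1 if A > B,
# in version order. Relies on GNU sort's -V (version sort).
semver_cmp() {
  if [ "$1" = "$2" ]; then
    echo 0
    return
  fi
  lower=$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)
  if [ "$lower" = "$1" ]; then
    echo -1
  else
    echo 1
  fi
}

semver_cmp 9.2.0 5.0.0   # prints 1: 9.2.0 sorts after 5.0.0
```

Version sort is what makes `9.2.0 > 5.0.0` and also `10.0.0 > 9.2.0` come out right, where plain lexical comparison would fail on the latter.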
2025-07-12 20:47:44.560059 | orchestrator | 2025-07-12 20:47:44.560206 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-07-12 20:47:44.560225 | orchestrator | 2025-07-12 20:47:44.560236 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-07-12 20:47:44.560248 | orchestrator | Saturday 12 July 2025 20:47:35 +0000 (0:00:00.368) 0:00:00.368 ********* 2025-07-12 20:47:44.560259 | orchestrator | ok: [testbed-manager] 2025-07-12 20:47:44.560272 | orchestrator | ok: [testbed-node-0] 2025-07-12 20:47:44.560282 | orchestrator | ok: [testbed-node-1] 2025-07-12 20:47:44.560293 | orchestrator | ok: [testbed-node-2] 2025-07-12 20:47:44.560353 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:47:44.560365 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:47:44.560376 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:47:44.560387 | orchestrator | 2025-07-12 20:47:44.560398 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-07-12 20:47:44.560410 | orchestrator | Saturday 12 July 2025 20:47:37 +0000 (0:00:01.521) 0:00:01.889 ********* 2025-07-12 20:47:44.560421 | orchestrator | skipping: [testbed-manager] 2025-07-12 20:47:44.560433 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:47:44.560444 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:47:44.560455 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:47:44.560466 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:47:44.560476 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:47:44.560514 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:47:44.560526 | orchestrator | 2025-07-12 20:47:44.560537 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-07-12 20:47:44.560548 | orchestrator | 2025-07-12 20:47:44.560559 | orchestrator | TASK [Gathers facts about hosts] 
***********************************************
2025-07-12 20:47:44.560570 | orchestrator | Saturday 12 July 2025 20:47:38 +0000 (0:00:01.363) 0:00:03.252 *********
2025-07-12 20:47:44.560580 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:47:44.560591 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:47:44.560604 | orchestrator | ok: [testbed-manager]
2025-07-12 20:47:44.560616 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:47:44.560628 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:47:44.560640 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:47:44.560652 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:47:44.560664 | orchestrator |
2025-07-12 20:47:44.560676 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-07-12 20:47:44.560688 | orchestrator |
2025-07-12 20:47:44.560700 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-07-12 20:47:44.560713 | orchestrator | Saturday 12 July 2025 20:47:43 +0000 (0:00:05.069) 0:00:08.322 *********
2025-07-12 20:47:44.560725 | orchestrator | skipping: [testbed-manager]
2025-07-12 20:47:44.560737 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:47:44.560749 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:47:44.560761 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:47:44.560773 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:47:44.560785 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:47:44.560797 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:47:44.560809 | orchestrator |
2025-07-12 20:47:44.560822 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:47:44.560835 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:47:44.560849 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:47:44.560862 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:47:44.560879 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:47:44.560917 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:47:44.560936 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:47:44.560953 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:47:44.560971 | orchestrator |
2025-07-12 20:47:44.560982 | orchestrator |
2025-07-12 20:47:44.560993 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:47:44.561004 | orchestrator | Saturday 12 July 2025 20:47:44 +0000 (0:00:00.583) 0:00:08.906 *********
2025-07-12 20:47:44.561015 | orchestrator | ===============================================================================
2025-07-12 20:47:44.561026 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.07s
2025-07-12 20:47:44.561037 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.52s
2025-07-12 20:47:44.561048 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.36s
2025-07-12 20:47:44.561059 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.58s
2025-07-12 20:47:44.864854 | orchestrator | + osism validate ceph-mons
2025-07-12 20:48:16.489725 | orchestrator |
2025-07-12 20:48:16.489917 | orchestrator | PLAY [Ceph validate mons] ******************************************************
2025-07-12 20:48:16.489939 | orchestrator |
2025-07-12 20:48:16.489952 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-07-12 20:48:16.489964 | orchestrator | Saturday 12 July 2025 20:48:01 +0000 (0:00:00.459) 0:00:00.460 *********
2025-07-12 20:48:16.489976 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 20:48:16.489987 | orchestrator |
2025-07-12 20:48:16.489998 | orchestrator | TASK [Create report output directory] ******************************************
2025-07-12 20:48:16.490009 | orchestrator | Saturday 12 July 2025 20:48:01 +0000 (0:00:00.645) 0:00:01.105 *********
2025-07-12 20:48:16.490087 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 20:48:16.490099 | orchestrator |
2025-07-12 20:48:16.490110 | orchestrator | TASK [Define report vars] ******************************************************
2025-07-12 20:48:16.490121 | orchestrator | Saturday 12 July 2025 20:48:02 +0000 (0:00:00.829) 0:00:01.935 *********
2025-07-12 20:48:16.490132 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:16.490144 | orchestrator |
2025-07-12 20:48:16.490155 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-07-12 20:48:16.490167 | orchestrator | Saturday 12 July 2025 20:48:02 +0000 (0:00:00.242) 0:00:02.177 *********
2025-07-12 20:48:16.490178 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:16.490189 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:48:16.490200 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:48:16.490210 | orchestrator |
2025-07-12 20:48:16.490230 | orchestrator | TASK [Get container info] ******************************************************
2025-07-12 20:48:16.490241 | orchestrator | Saturday 12 July 2025 20:48:03 +0000 (0:00:00.298) 0:00:02.476 *********
2025-07-12 20:48:16.490252 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:16.490263 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:48:16.490274 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:48:16.490285 | orchestrator |
2025-07-12 20:48:16.490296 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-07-12 20:48:16.490307 | orchestrator | Saturday 12 July 2025 20:48:04 +0000 (0:00:00.956) 0:00:03.433 *********
2025-07-12 20:48:16.490318 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:16.490350 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:48:16.490361 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:48:16.490372 | orchestrator |
2025-07-12 20:48:16.490383 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-07-12 20:48:16.490394 | orchestrator | Saturday 12 July 2025 20:48:04 +0000 (0:00:00.291) 0:00:03.724 *********
2025-07-12 20:48:16.490405 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:16.490416 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:48:16.490427 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:48:16.490437 | orchestrator |
2025-07-12 20:48:16.490449 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 20:48:16.490460 | orchestrator | Saturday 12 July 2025 20:48:04 +0000 (0:00:00.534) 0:00:04.258 *********
2025-07-12 20:48:16.490470 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:16.490481 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:48:16.490492 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:48:16.490503 | orchestrator |
2025-07-12 20:48:16.490514 | orchestrator | TASK [Set test result to failed if ceph-mon is not running] ********************
2025-07-12 20:48:16.490525 | orchestrator | Saturday 12 July 2025 20:48:05 +0000 (0:00:00.311) 0:00:04.570 *********
2025-07-12 20:48:16.490536 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:16.490546 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:48:16.490557 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:48:16.490568 | orchestrator |
2025-07-12 20:48:16.490579 | orchestrator | TASK [Set test result to passed if ceph-mon is running] ************************
2025-07-12 20:48:16.490590 | orchestrator | Saturday 12 July 2025 20:48:05 +0000 (0:00:00.313) 0:00:04.884 *********
2025-07-12 20:48:16.490601 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:16.490612 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:48:16.490705 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:48:16.490717 | orchestrator |
2025-07-12 20:48:16.490728 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 20:48:16.490739 | orchestrator | Saturday 12 July 2025 20:48:05 +0000 (0:00:00.324) 0:00:05.208 *********
2025-07-12 20:48:16.490750 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:16.490761 | orchestrator |
2025-07-12 20:48:16.490772 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 20:48:16.490782 | orchestrator | Saturday 12 July 2025 20:48:06 +0000 (0:00:00.670) 0:00:05.879 *********
2025-07-12 20:48:16.490793 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:16.490804 | orchestrator |
2025-07-12 20:48:16.490815 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 20:48:16.490826 | orchestrator | Saturday 12 July 2025 20:48:06 +0000 (0:00:00.235) 0:00:06.114 *********
2025-07-12 20:48:16.490836 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:16.490847 | orchestrator |
2025-07-12 20:48:16.490858 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:48:16.490869 | orchestrator | Saturday 12 July 2025 20:48:07 +0000 (0:00:00.253) 0:00:06.367 *********
2025-07-12 20:48:16.490880 | orchestrator |
2025-07-12 20:48:16.490891 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:48:16.490902 | orchestrator | Saturday 12 July 2025 20:48:07 +0000 (0:00:00.070) 0:00:06.437 *********
2025-07-12 20:48:16.490912 | orchestrator |
2025-07-12 20:48:16.490923 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:48:16.490934 | orchestrator | Saturday 12 July 2025 20:48:07 +0000 (0:00:00.075) 0:00:06.512 *********
2025-07-12 20:48:16.490945 | orchestrator |
2025-07-12 20:48:16.490955 | orchestrator | TASK [Print report file information] *******************************************
2025-07-12 20:48:16.490966 | orchestrator | Saturday 12 July 2025 20:48:07 +0000 (0:00:00.073) 0:00:06.586 *********
2025-07-12 20:48:16.490977 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:16.490987 | orchestrator |
2025-07-12 20:48:16.490998 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-07-12 20:48:16.491009 | orchestrator | Saturday 12 July 2025 20:48:07 +0000 (0:00:00.250) 0:00:06.836 *********
2025-07-12 20:48:16.491020 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:16.491031 | orchestrator |
2025-07-12 20:48:16.491062 | orchestrator | TASK [Prepare quorum test vars] ************************************************
2025-07-12 20:48:16.491074 | orchestrator | Saturday 12 July 2025 20:48:07 +0000 (0:00:00.236) 0:00:07.072 *********
2025-07-12 20:48:16.491084 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:16.491095 | orchestrator |
2025-07-12 20:48:16.491106 | orchestrator | TASK [Get monmap info from one mon container] **********************************
2025-07-12 20:48:16.491117 | orchestrator | Saturday 12 July 2025 20:48:07 +0000 (0:00:00.118) 0:00:07.190 *********
2025-07-12 20:48:16.491128 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:48:16.491139 | orchestrator |
2025-07-12 20:48:16.491150 | orchestrator | TASK [Set quorum test data] ****************************************************
2025-07-12 20:48:16.491161 | orchestrator | Saturday 12 July 2025 20:48:09 +0000 (0:00:01.496) 0:00:08.687 *********
2025-07-12 20:48:16.491172 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:16.491182 | orchestrator |
2025-07-12 20:48:16.491193 | orchestrator | TASK [Fail quorum test if not all monitors are in quorum] **********************
2025-07-12 20:48:16.491204 | orchestrator | Saturday 12 July 2025 20:48:09 +0000 (0:00:00.300) 0:00:08.988 *********
2025-07-12 20:48:16.491215 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:16.491226 | orchestrator |
2025-07-12 20:48:16.491236 | orchestrator | TASK [Pass quorum test if all monitors are in quorum] **************************
2025-07-12 20:48:16.491247 | orchestrator | Saturday 12 July 2025 20:48:09 +0000 (0:00:00.330) 0:00:09.318 *********
2025-07-12 20:48:16.491258 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:16.491269 | orchestrator |
2025-07-12 20:48:16.491280 | orchestrator | TASK [Set fsid test vars] ******************************************************
2025-07-12 20:48:16.491298 | orchestrator | Saturday 12 July 2025 20:48:10 +0000 (0:00:00.342) 0:00:09.661 *********
2025-07-12 20:48:16.491309 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:16.491320 | orchestrator |
2025-07-12 20:48:16.491345 | orchestrator | TASK [Fail Cluster FSID test if FSID does not match configuration] *************
2025-07-12 20:48:16.491356 | orchestrator | Saturday 12 July 2025 20:48:10 +0000 (0:00:00.347) 0:00:10.008 *********
2025-07-12 20:48:16.491367 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:16.491377 | orchestrator |
2025-07-12 20:48:16.491388 | orchestrator | TASK [Pass Cluster FSID test if it matches configuration] **********************
2025-07-12 20:48:16.491399 | orchestrator | Saturday 12 July 2025 20:48:10 +0000 (0:00:00.127) 0:00:10.136 *********
2025-07-12 20:48:16.491410 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:16.491420 | orchestrator |
2025-07-12 20:48:16.491431 | orchestrator | TASK [Prepare status test vars] ************************************************
2025-07-12 20:48:16.491442 | orchestrator | Saturday 12 July 2025 20:48:10 +0000 (0:00:00.144) 0:00:10.281 *********
2025-07-12 20:48:16.491452 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:16.491463 | orchestrator |
2025-07-12 20:48:16.491474 | orchestrator | TASK [Gather status data] ******************************************************
2025-07-12 20:48:16.491485 | orchestrator | Saturday 12 July 2025 20:48:11 +0000 (0:00:00.136) 0:00:10.417 *********
2025-07-12 20:48:16.491495 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:48:16.491506 | orchestrator |
2025-07-12 20:48:16.491517 | orchestrator | TASK [Set health test data] ****************************************************
2025-07-12 20:48:16.491527 | orchestrator | Saturday 12 July 2025 20:48:12 +0000 (0:00:01.245) 0:00:11.663 *********
2025-07-12 20:48:16.491538 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:16.491549 | orchestrator |
2025-07-12 20:48:16.491560 | orchestrator | TASK [Fail cluster-health if health is not acceptable] *************************
2025-07-12 20:48:16.491570 | orchestrator | Saturday 12 July 2025 20:48:12 +0000 (0:00:00.299) 0:00:11.962 *********
2025-07-12 20:48:16.491581 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:16.491592 | orchestrator |
2025-07-12 20:48:16.491602 | orchestrator | TASK [Pass cluster-health if health is acceptable] *****************************
2025-07-12 20:48:16.491613 | orchestrator | Saturday 12 July 2025 20:48:12 +0000 (0:00:00.142) 0:00:12.105 *********
2025-07-12 20:48:16.491624 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:16.491635 | orchestrator |
2025-07-12 20:48:16.491645 | orchestrator | TASK [Fail cluster-health if health is not acceptable (strict)] ****************
2025-07-12 20:48:16.491656 | orchestrator | Saturday 12 July 2025 20:48:12 +0000 (0:00:00.160) 0:00:12.265 *********
2025-07-12 20:48:16.491667 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:16.491678 | orchestrator |
2025-07-12 20:48:16.491689 | orchestrator | TASK [Pass cluster-health if status is OK (strict)] ****************************
2025-07-12 20:48:16.491700 | orchestrator | Saturday 12 July 2025 20:48:13 +0000 (0:00:00.127) 0:00:12.392 *********
2025-07-12 20:48:16.491710 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:16.491721 | orchestrator |
2025-07-12 20:48:16.491732 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-07-12 20:48:16.491743 | orchestrator | Saturday 12 July 2025 20:48:13 +0000 (0:00:00.344) 0:00:12.737 *********
2025-07-12 20:48:16.491753 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 20:48:16.491764 | orchestrator |
2025-07-12 20:48:16.491775 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-07-12 20:48:16.491785 | orchestrator | Saturday 12 July 2025 20:48:13 +0000 (0:00:00.246) 0:00:12.983 *********
2025-07-12 20:48:16.491796 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:16.491806 | orchestrator |
2025-07-12 20:48:16.491817 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 20:48:16.491828 | orchestrator | Saturday 12 July 2025 20:48:13 +0000 (0:00:00.269) 0:00:13.252 *********
2025-07-12 20:48:16.491839 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 20:48:16.491849 | orchestrator |
2025-07-12 20:48:16.491861 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 20:48:16.491878 | orchestrator | Saturday 12 July 2025 20:48:15 +0000 (0:00:01.797) 0:00:15.050 *********
2025-07-12 20:48:16.491889 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 20:48:16.491899 | orchestrator |
2025-07-12 20:48:16.491910 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 20:48:16.491921 | orchestrator | Saturday 12 July 2025 20:48:16 +0000 (0:00:00.277) 0:00:15.328 *********
2025-07-12 20:48:16.491931 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 20:48:16.491942 | orchestrator |
2025-07-12 20:48:16.491960 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:48:18.686638 | orchestrator | Saturday 12 July 2025 20:48:16 +0000 (0:00:00.254) 0:00:15.582 *********
2025-07-12 20:48:18.686745 | orchestrator |
2025-07-12 20:48:18.686763 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:48:18.686776 | orchestrator | Saturday 12 July 2025 20:48:16 +0000 (0:00:00.071) 0:00:15.654 *********
2025-07-12 20:48:18.686788 | orchestrator |
2025-07-12 20:48:18.686799 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:48:18.686810 | orchestrator | Saturday 12 July 2025 20:48:16 +0000 (0:00:00.069) 0:00:15.723 *********
2025-07-12 20:48:18.686825 | orchestrator |
2025-07-12 20:48:18.686836 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-07-12 20:48:18.686847 | orchestrator | Saturday 12 July 2025 20:48:16 +0000 (0:00:00.074) 0:00:15.797 *********
2025-07-12 20:48:18.686859 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 20:48:18.686870 | orchestrator |
2025-07-12 20:48:18.686881 | orchestrator | TASK [Print report file information] *******************************************
2025-07-12 20:48:18.686891 | orchestrator | Saturday 12 July 2025 20:48:17 +0000 (0:00:01.322) 0:00:17.120 *********
2025-07-12 20:48:18.686902 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-07-12 20:48:18.686913 | orchestrator |  "msg": [
2025-07-12 20:48:18.686925 | orchestrator |  "Validator run completed.",
2025-07-12 20:48:18.686958 | orchestrator |  "You can find the report file here:",
2025-07-12 20:48:18.686975 | orchestrator |  "/opt/reports/validator/ceph-mons-validator-2025-07-12T20:48:01+00:00-report.json",
2025-07-12 20:48:18.686987 | orchestrator |  "on the following host:",
2025-07-12 20:48:18.686998 | orchestrator |  "testbed-manager"
2025-07-12 20:48:18.687009 | orchestrator |  ]
2025-07-12 20:48:18.687020 | orchestrator | }
2025-07-12 20:48:18.687031 | orchestrator |
2025-07-12 20:48:18.687043 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:48:18.687055 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-07-12 20:48:18.687067 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:48:18.687079 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:48:18.687104 | orchestrator |
2025-07-12 20:48:18.687116 | orchestrator |
2025-07-12 20:48:18.687126 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:48:18.687137 | orchestrator | Saturday 12 July 2025 20:48:18 +0000 (0:00:00.582) 0:00:17.703 *********
2025-07-12 20:48:18.687148 | orchestrator | ===============================================================================
2025-07-12 20:48:18.687159 | orchestrator | Aggregate test results step one ----------------------------------------- 1.80s
2025-07-12 20:48:18.687170 | orchestrator | Get monmap info from one mon container ---------------------------------- 1.50s
2025-07-12 20:48:18.687183 | orchestrator | Write report file ------------------------------------------------------- 1.32s
2025-07-12 20:48:18.687195 | orchestrator | Gather status data ------------------------------------------------------ 1.25s
2025-07-12 20:48:18.687235 | orchestrator | Get container info ------------------------------------------------------ 0.96s
2025-07-12 20:48:18.687248 | orchestrator | Create report output directory ------------------------------------------ 0.83s
2025-07-12 20:48:18.687260 | orchestrator | Aggregate test results step one ----------------------------------------- 0.67s
2025-07-12 20:48:18.687273 | orchestrator | Get timestamp for report file ------------------------------------------- 0.65s
2025-07-12 20:48:18.687285 | orchestrator | Print report file information ------------------------------------------- 0.58s
2025-07-12 20:48:18.687298 | orchestrator | Set test result to passed if container is existing ---------------------- 0.53s
2025-07-12 20:48:18.687309 | orchestrator | Set fsid test vars ------------------------------------------------------ 0.35s
2025-07-12 20:48:18.687320 | orchestrator | Pass cluster-health if status is OK (strict) ---------------------------- 0.34s
2025-07-12 20:48:18.687355 | orchestrator | Pass quorum test if all monitors are in quorum -------------------------- 0.34s
2025-07-12 20:48:18.687368 | orchestrator | Fail quorum test if not all monitors are in quorum ---------------------- 0.33s
2025-07-12 20:48:18.687379 | orchestrator | Set test result to passed if ceph-mon is running ------------------------ 0.32s
2025-07-12 20:48:18.687389 | orchestrator | Set test result to failed if ceph-mon is not running -------------------- 0.31s
2025-07-12 20:48:18.687400 | orchestrator | Prepare test data ------------------------------------------------------- 0.31s
2025-07-12 20:48:18.687411 | orchestrator | Set quorum test data ---------------------------------------------------- 0.30s
2025-07-12 20:48:18.687422 | orchestrator | Set health test data ---------------------------------------------------- 0.30s
2025-07-12 20:48:18.687433 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s
2025-07-12 20:48:18.968972 | orchestrator | + osism validate ceph-mgrs
2025-07-12 20:48:51.271238 | orchestrator |
2025-07-12 20:48:51.271451 | orchestrator | PLAY [Ceph validate mgrs] ******************************************************
2025-07-12 20:48:51.271474 | orchestrator |
2025-07-12 20:48:51.271486 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-07-12 20:48:51.271499 | orchestrator | Saturday 12 July 2025 20:48:35 +0000 (0:00:00.443) 0:00:00.443 *********
2025-07-12 20:48:51.271510 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 20:48:51.271522 | orchestrator |
2025-07-12 20:48:51.271533 | orchestrator | TASK [Create report output directory] ******************************************
2025-07-12 20:48:51.271544 | orchestrator | Saturday 12 July 2025 20:48:37 +0000 (0:00:01.666) 0:00:02.110 *********
2025-07-12 20:48:51.271555 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 20:48:51.271567 | orchestrator |
2025-07-12 20:48:51.271578 | orchestrator | TASK [Define report vars] ******************************************************
2025-07-12 20:48:51.271589 | orchestrator | Saturday 12 July 2025 20:48:37 +0000 (0:00:00.861) 0:00:02.971 *********
2025-07-12 20:48:51.271600 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:51.271612 | orchestrator |
2025-07-12 20:48:51.271623 | orchestrator | TASK [Prepare test data for container existance test] **************************
2025-07-12 20:48:51.271634 | orchestrator | Saturday 12 July 2025 20:48:38 +0000 (0:00:00.256) 0:00:03.227 *********
2025-07-12 20:48:51.271645 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:51.271656 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:48:51.271667 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:48:51.271678 | orchestrator |
2025-07-12 20:48:51.271689 | orchestrator | TASK [Get container info] ******************************************************
2025-07-12 20:48:51.271702 | orchestrator | Saturday 12 July 2025 20:48:38 +0000 (0:00:00.299) 0:00:03.527 *********
2025-07-12 20:48:51.271714 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:48:51.271726 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:48:51.271738 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:51.271751 | orchestrator |
2025-07-12 20:48:51.271763 | orchestrator | TASK [Set test result to failed if container is missing] ***********************
2025-07-12 20:48:51.271776 | orchestrator | Saturday 12 July 2025 20:48:39 +0000 (0:00:00.992) 0:00:04.520 *********
2025-07-12 20:48:51.271843 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:51.271872 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:48:51.271885 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:48:51.271899 | orchestrator |
2025-07-12 20:48:51.271911 | orchestrator | TASK [Set test result to passed if container is existing] **********************
2025-07-12 20:48:51.271923 | orchestrator | Saturday 12 July 2025 20:48:39 +0000 (0:00:00.288) 0:00:04.808 *********
2025-07-12 20:48:51.271935 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:51.271947 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:48:51.271959 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:48:51.271971 | orchestrator |
2025-07-12 20:48:51.271983 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 20:48:51.271995 | orchestrator | Saturday 12 July 2025 20:48:40 +0000 (0:00:00.497) 0:00:05.306 *********
2025-07-12 20:48:51.272007 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:51.272019 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:48:51.272032 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:48:51.272044 | orchestrator |
2025-07-12 20:48:51.272056 | orchestrator | TASK [Set test result to failed if ceph-mgr is not running] ********************
2025-07-12 20:48:51.272067 | orchestrator | Saturday 12 July 2025 20:48:40 +0000 (0:00:00.283) 0:00:05.609 *********
2025-07-12 20:48:51.272078 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:51.272089 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:48:51.272100 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:48:51.272111 | orchestrator |
2025-07-12 20:48:51.272122 | orchestrator | TASK [Set test result to passed if ceph-mgr is running] ************************
2025-07-12 20:48:51.272132 | orchestrator | Saturday 12 July 2025 20:48:40 +0000 (0:00:00.312) 0:00:05.893 *********
2025-07-12 20:48:51.272143 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:51.272154 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:48:51.272165 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:48:51.272175 | orchestrator |
2025-07-12 20:48:51.272186 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 20:48:51.272197 | orchestrator | Saturday 12 July 2025 20:48:41 +0000 (0:00:00.649) 0:00:06.205 *********
2025-07-12 20:48:51.272208 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:51.272219 | orchestrator |
2025-07-12 20:48:51.272230 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 20:48:51.272240 | orchestrator | Saturday 12 July 2025 20:48:41 +0000 (0:00:00.245) 0:00:06.854 *********
2025-07-12 20:48:51.272251 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:51.272262 | orchestrator |
2025-07-12 20:48:51.272273 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 20:48:51.272283 | orchestrator | Saturday 12 July 2025 20:48:42 +0000 (0:00:00.246) 0:00:07.099 *********
2025-07-12 20:48:51.272294 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:51.272305 | orchestrator |
2025-07-12 20:48:51.272316 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:48:51.272327 | orchestrator | Saturday 12 July 2025 20:48:42 +0000 (0:00:00.246) 0:00:07.346 *********
2025-07-12 20:48:51.272337 | orchestrator |
2025-07-12 20:48:51.272369 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:48:51.272382 | orchestrator | Saturday 12 July 2025 20:48:42 +0000 (0:00:00.081) 0:00:07.428 *********
2025-07-12 20:48:51.272392 | orchestrator |
2025-07-12 20:48:51.272404 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:48:51.272415 | orchestrator | Saturday 12 July 2025 20:48:42 +0000 (0:00:00.067) 0:00:07.495 *********
2025-07-12 20:48:51.272425 | orchestrator |
2025-07-12 20:48:51.272436 | orchestrator | TASK [Print report file information] *******************************************
2025-07-12 20:48:51.272447 | orchestrator | Saturday 12 July 2025 20:48:42 +0000 (0:00:00.073) 0:00:07.569 *********
2025-07-12 20:48:51.272458 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:51.272469 | orchestrator |
2025-07-12 20:48:51.272480 | orchestrator | TASK [Fail due to missing containers] ******************************************
2025-07-12 20:48:51.272499 | orchestrator | Saturday 12 July 2025 20:48:42 +0000 (0:00:00.254) 0:00:07.824 *********
2025-07-12 20:48:51.272510 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:51.272521 | orchestrator |
2025-07-12 20:48:51.272551 | orchestrator | TASK [Define mgr module test vars] *********************************************
2025-07-12 20:48:51.272563 | orchestrator | Saturday 12 July 2025 20:48:43 +0000 (0:00:00.237) 0:00:08.061 *********
2025-07-12 20:48:51.272574 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:51.272585 | orchestrator |
2025-07-12 20:48:51.272596 | orchestrator | TASK [Gather list of mgr modules] **********************************************
2025-07-12 20:48:51.272606 | orchestrator | Saturday 12 July 2025 20:48:43 +0000 (0:00:00.131) 0:00:08.193 *********
2025-07-12 20:48:51.272617 | orchestrator | changed: [testbed-node-0]
2025-07-12 20:48:51.272628 | orchestrator |
2025-07-12 20:48:51.272639 | orchestrator | TASK [Parse mgr module list from json] *****************************************
2025-07-12 20:48:51.272650 | orchestrator | Saturday 12 July 2025 20:48:45 +0000 (0:00:01.891) 0:00:10.084 *********
2025-07-12 20:48:51.272660 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:51.272671 | orchestrator |
2025-07-12 20:48:51.272682 | orchestrator | TASK [Extract list of enabled mgr modules] *************************************
2025-07-12 20:48:51.272693 | orchestrator | Saturday 12 July 2025 20:48:45 +0000 (0:00:00.258) 0:00:10.342 *********
2025-07-12 20:48:51.272703 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:51.272714 | orchestrator |
2025-07-12 20:48:51.272725 | orchestrator | TASK [Fail test if mgr modules are disabled that should be enabled] ************
2025-07-12 20:48:51.272736 | orchestrator | Saturday 12 July 2025 20:48:46 +0000 (0:00:00.883) 0:00:11.225 *********
2025-07-12 20:48:51.272746 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:51.272757 | orchestrator |
2025-07-12 20:48:51.272768 | orchestrator | TASK [Pass test if required mgr modules are enabled] ***************************
2025-07-12 20:48:51.272779 | orchestrator | Saturday 12 July 2025 20:48:46 +0000 (0:00:00.136) 0:00:11.362 *********
2025-07-12 20:48:51.272789 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:48:51.272800 | orchestrator |
2025-07-12 20:48:51.272811 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-07-12 20:48:51.272822 | orchestrator | Saturday 12 July 2025 20:48:46 +0000 (0:00:00.161) 0:00:11.523 *********
2025-07-12 20:48:51.272832 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 20:48:51.272843 | orchestrator |
2025-07-12 20:48:51.272854 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-07-12 20:48:51.272864 | orchestrator | Saturday 12 July 2025 20:48:46 +0000 (0:00:00.252) 0:00:11.776 *********
2025-07-12 20:48:51.272875 | orchestrator | skipping: [testbed-node-0]
2025-07-12 20:48:51.272886 | orchestrator |
2025-07-12 20:48:51.272897 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 20:48:51.272908 | orchestrator | Saturday 12 July 2025 20:48:47 +0000 (0:00:00.252) 0:00:12.028 *********
2025-07-12 20:48:51.272918 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 20:48:51.272929 | orchestrator |
2025-07-12 20:48:51.272940 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 20:48:51.272950 | orchestrator | Saturday 12 July 2025 20:48:48 +0000 (0:00:01.266) 0:00:13.295 *********
2025-07-12 20:48:51.272961 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 20:48:51.272972 | orchestrator |
2025-07-12 20:48:51.272982 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 20:48:51.272993 | orchestrator | Saturday 12 July 2025 20:48:48 +0000 (0:00:00.263) 0:00:13.558 *********
2025-07-12 20:48:51.273004 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 20:48:51.273014 | orchestrator |
2025-07-12 20:48:51.273025 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:48:51.273036 | orchestrator | Saturday 12 July 2025 20:48:48 +0000 (0:00:00.281) 0:00:13.840 *********
2025-07-12 20:48:51.273046 | orchestrator |
2025-07-12 20:48:51.273057 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:48:51.273074 | orchestrator | Saturday 12 July 2025 20:48:48 +0000 (0:00:00.068) 0:00:13.909 *********
2025-07-12 20:48:51.273085 | orchestrator |
2025-07-12 20:48:51.273096 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:48:51.273107 | orchestrator | Saturday 12 July 2025 20:48:48 +0000 (0:00:00.066) 0:00:13.975 *********
2025-07-12 20:48:51.273117 | orchestrator |
2025-07-12 20:48:51.273128 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-07-12 20:48:51.273138 | orchestrator | Saturday 12 July 2025 20:48:49 +0000 (0:00:00.075) 0:00:14.050 *********
2025-07-12 20:48:51.273149 | orchestrator | changed: [testbed-node-0 -> testbed-manager(192.168.16.5)]
2025-07-12 20:48:51.273160 | orchestrator |
2025-07-12 20:48:51.273170 | orchestrator | TASK [Print report file information] *******************************************
2025-07-12 20:48:51.273181 | orchestrator | Saturday 12 July 2025 20:48:50 +0000 (0:00:01.791) 0:00:15.842 *********
2025-07-12 20:48:51.273192 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => {
2025-07-12 20:48:51.273203 | orchestrator |  "msg": [
2025-07-12 20:48:51.273214 | orchestrator |  "Validator run completed.",
2025-07-12 20:48:51.273224 | orchestrator |  "You can find the report file here:",
2025-07-12 20:48:51.273235 | orchestrator |  "/opt/reports/validator/ceph-mgrs-validator-2025-07-12T20:48:35+00:00-report.json",
2025-07-12 20:48:51.273251 | orchestrator |  "on the following host:",
2025-07-12 20:48:51.273270 | orchestrator |  "testbed-manager"
2025-07-12 20:48:51.273289 | orchestrator |  ]
2025-07-12 20:48:51.273307 | orchestrator | }
2025-07-12 20:48:51.273325 | orchestrator |
2025-07-12 20:48:51.273342 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:48:51.273384 | orchestrator | testbed-node-0 : ok=19  changed=3  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-12 20:48:51.273405 | orchestrator | testbed-node-1 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:48:51.273434 | orchestrator | testbed-node-2 : ok=5  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-07-12 20:48:51.607460 | orchestrator |
2025-07-12 20:48:51.607566 | orchestrator |
2025-07-12 20:48:51.607582 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:48:51.607596 | orchestrator | Saturday 12 July 2025 20:48:51 +0000 (0:00:00.408) 0:00:16.250 *********
2025-07-12 20:48:51.607607 | orchestrator | ===============================================================================
2025-07-12 20:48:51.607618 | orchestrator | Gather list of mgr modules ---------------------------------------------- 1.89s
2025-07-12 20:48:51.607629 | orchestrator | Write report file ------------------------------------------------------- 1.79s
2025-07-12 20:48:51.607640 | orchestrator | Get timestamp for report file ------------------------------------------- 1.67s
2025-07-12 20:48:51.607675 | orchestrator | Aggregate test results step one ----------------------------------------- 1.27s
2025-07-12 20:48:51.607687 | orchestrator | Get container info ------------------------------------------------------ 0.99s
2025-07-12 20:48:51.607699 | orchestrator | Extract list of enabled mgr modules ------------------------------------- 0.88s
2025-07-12 20:48:51.607710 | orchestrator | Create report output directory ------------------------------------------ 0.86s
2025-07-12 20:48:51.607721 | orchestrator | Aggregate test results step one ----------------------------------------- 0.65s
2025-07-12 20:48:51.607732 | orchestrator | Set test result to passed if container is existing ---------------------- 0.50s
2025-07-12 20:48:51.607743 | orchestrator | Print report file information ------------------------------------------- 0.41s
2025-07-12 20:48:51.607754 | orchestrator | Set test result to passed if ceph-mgr is running ------------------------ 0.31s
2025-07-12 20:48:51.607764 | orchestrator | Prepare test data ------------------------------------------------------- 0.30s
2025-07-12 20:48:51.607797 | orchestrator | Prepare test data for container existance test -------------------------- 0.30s
2025-07-12 20:48:51.607809 | orchestrator | Set test result to failed if container is missing ----------------------- 0.29s
2025-07-12 20:48:51.607820 | orchestrator | Set test result to failed if ceph-mgr is not running -------------------- 0.28s
2025-07-12 20:48:51.607836 | orchestrator | Aggregate test results step three --------------------------------------- 0.28s
2025-07-12 20:48:51.607847 | orchestrator | Aggregate test results step two ----------------------------------------- 0.26s
2025-07-12 20:48:51.607858 | orchestrator | Parse mgr module list from json ----------------------------------------- 0.26s
2025-07-12 20:48:51.607869 | orchestrator | Define report vars ------------------------------------------------------ 0.26s
2025-07-12 20:48:51.607880 | orchestrator | Print report file information ------------------------------------------- 0.25s
2025-07-12 20:48:51.899916 | orchestrator | + osism validate ceph-osds
2025-07-12 20:49:13.475271 | orchestrator |
2025-07-12 20:49:13.475515 | orchestrator | PLAY [Ceph validate OSDs] ******************************************************
2025-07-12 20:49:13.475540 | orchestrator |
2025-07-12 20:49:13.475553 | orchestrator | TASK [Get timestamp for report file] *******************************************
2025-07-12 20:49:13.475563 | orchestrator | Saturday 12 July 2025 20:49:08 +0000 (0:00:00.437) 0:00:00.437 *********
2025-07-12 20:49:13.475575 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 20:49:13.475586 | orchestrator |
2025-07-12 20:49:13.475595 | orchestrator | TASK [Get extra vars for Ceph configuration]
*********************************** 2025-07-12 20:49:13.475604 | orchestrator | Saturday 12 July 2025 20:49:09 +0000 (0:00:01.683) 0:00:02.120 ********* 2025-07-12 20:49:13.475613 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 20:49:13.475622 | orchestrator | 2025-07-12 20:49:13.475631 | orchestrator | TASK [Create report output directory] ****************************************** 2025-07-12 20:49:13.475641 | orchestrator | Saturday 12 July 2025 20:49:10 +0000 (0:00:00.241) 0:00:02.362 ********* 2025-07-12 20:49:13.475651 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-07-12 20:49:13.475659 | orchestrator | 2025-07-12 20:49:13.475670 | orchestrator | TASK [Define report vars] ****************************************************** 2025-07-12 20:49:13.475680 | orchestrator | Saturday 12 July 2025 20:49:11 +0000 (0:00:01.020) 0:00:03.383 ********* 2025-07-12 20:49:13.475690 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:49:13.475700 | orchestrator | 2025-07-12 20:49:13.475709 | orchestrator | TASK [Define OSD test variables] *********************************************** 2025-07-12 20:49:13.475718 | orchestrator | Saturday 12 July 2025 20:49:11 +0000 (0:00:00.136) 0:00:03.520 ********* 2025-07-12 20:49:13.475727 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:49:13.475736 | orchestrator | 2025-07-12 20:49:13.475746 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-07-12 20:49:13.475756 | orchestrator | Saturday 12 July 2025 20:49:11 +0000 (0:00:00.133) 0:00:03.653 ********* 2025-07-12 20:49:13.475766 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:49:13.475776 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:49:13.475786 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:49:13.475795 | orchestrator | 2025-07-12 20:49:13.475806 | orchestrator | TASK [Define OSD test variables] 
*********************************************** 2025-07-12 20:49:13.475816 | orchestrator | Saturday 12 July 2025 20:49:11 +0000 (0:00:00.326) 0:00:03.980 ********* 2025-07-12 20:49:13.475827 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:49:13.475838 | orchestrator | 2025-07-12 20:49:13.475849 | orchestrator | TASK [Calculate OSD devices for each host] ************************************* 2025-07-12 20:49:13.475859 | orchestrator | Saturday 12 July 2025 20:49:11 +0000 (0:00:00.153) 0:00:04.134 ********* 2025-07-12 20:49:13.475870 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:49:13.475880 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:49:13.475891 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:49:13.475901 | orchestrator | 2025-07-12 20:49:13.475910 | orchestrator | TASK [Calculate total number of OSDs in cluster] ******************************* 2025-07-12 20:49:13.475921 | orchestrator | Saturday 12 July 2025 20:49:12 +0000 (0:00:00.363) 0:00:04.498 ********* 2025-07-12 20:49:13.475959 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:49:13.475970 | orchestrator | 2025-07-12 20:49:13.475980 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-12 20:49:13.475991 | orchestrator | Saturday 12 July 2025 20:49:12 +0000 (0:00:00.524) 0:00:05.022 ********* 2025-07-12 20:49:13.476002 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:49:13.476012 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:49:13.476023 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:49:13.476034 | orchestrator | 2025-07-12 20:49:13.476045 | orchestrator | TASK [Get list of ceph-osd containers on host] ********************************* 2025-07-12 20:49:13.476109 | orchestrator | Saturday 12 July 2025 20:49:13 +0000 (0:00:00.471) 0:00:05.494 ********* 2025-07-12 20:49:13.476123 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2eda14923b799c7e247f8c04fc2a93541633dae33ca565fbf58d7d93088accaa', 'image': 
'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-12 20:49:13.476137 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e48e695bdd7041736f55ddc8ff506483c13c844b0e9f720f294995c7ca84a262', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-12 20:49:13.476148 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'e070a79e24a7f75d3d4049ece20e8c908e589118bca47334980db5f93f9606df', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-12 20:49:13.476161 | orchestrator | skipping: [testbed-node-3] => (item={'id': '5c8b79d6aee79e98960c3cdd470d224081a137d774b03c204b6dd57cedcb2c06', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-12 20:49:13.476188 | orchestrator | skipping: [testbed-node-3] => (item={'id': '47dd1e9f30c085d7680c170414ef6f2c6f6e92bb45978419bb669cfcb4c62a7b', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-07-12 20:49:13.476220 | orchestrator | skipping: [testbed-node-3] => (item={'id': '57406585870a416a9ab5408c1fcb40115e09b3d501ba12356720e85a28234035', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-07-12 20:49:13.476230 | orchestrator | skipping: [testbed-node-3] => (item={'id': '2e89f243c892bc8f7fdfda7a0b3d6957039ab57f9a045268a4ac97ddb23ccbb9', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': 
'/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-12 20:49:13.476248 | orchestrator | skipping: [testbed-node-3] => (item={'id': '762153dd812d3afe113fba75f8b9be718c9dd8a4da15af8ab5fbec961baa7093', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-07-12 20:49:13.476259 | orchestrator | skipping: [testbed-node-3] => (item={'id': '0338fc77c72f18055eda2bac0e5fe66ac1b19dbf37bffbd554ed6115d57405ea', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-07-12 20:49:13.476269 | orchestrator | skipping: [testbed-node-3] => (item={'id': '54bfba7a6d6b9634dd7cf92c52dceefa3fd91a77f0f368961157e4886ebce5c0', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-3-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-12 20:49:13.476280 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c2952e967de19d0be004dbcdc4a9204da099e8d96edfdb299972418d60626e6c', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-3', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-12 20:49:13.476301 | orchestrator | skipping: [testbed-node-3] => (item={'id': '3124c689cd4f02cdb7b612500d5ee30866f313fc43a974c1e688fa227b4bf2ec', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-3', 'state': 'running', 'status': 'Up 24 minutes'})  2025-07-12 20:49:13.476314 | orchestrator | ok: [testbed-node-3] => (item={'id': '636164411a4566b6aa2d97a22653b6824132bd393e8263a328059435011c8444', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-0', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-12 20:49:13.476325 | orchestrator | ok: [testbed-node-3] => (item={'id': 
'8aa229cf213460acf8a5db45d989033d91a5a7abc572d49457fb144a24b427a8', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-5', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-12 20:49:13.476336 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1101a5ff6c104714bf961e89bf21cc4980c2b94a784294ee03263b362c9c1f34', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-07-12 20:49:13.476346 | orchestrator | skipping: [testbed-node-3] => (item={'id': '1d63730af8702790d660284483fee290f5e96cf8a33f832f31ecc0210846dc8f', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-07-12 20:49:13.476357 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'ad0f8a72885607374126388addf79ecf43309c07ec75f6f35d301c368dd81912', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-07-12 20:49:13.476387 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'c99bb35d7df211eca89661518764aba66510fc5cbed27e4a54171506316e7105', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-12 20:49:13.476399 | orchestrator | skipping: [testbed-node-3] => (item={'id': '8081c6f19965b1ab96b9cbeab6567cfdaefc3ef869959f91dd30b8f0c08b6a05', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-12 20:49:13.476409 | orchestrator | skipping: [testbed-node-3] => (item={'id': 'd50b3cd6a07a5fec55a1ccd727879056273f3c57d98369ad168f31501822815f', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 
'state': 'running', 'status': 'Up 32 minutes'})  2025-07-12 20:49:13.476426 | orchestrator | skipping: [testbed-node-4] => (item={'id': '322b37bab173a8214bcc890b6fe323038d26b886ba0ad2d98152becbeb8c6616', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-12 20:49:13.616708 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'd0704b83077b921dfaf0ec1b06002f3eca95d3f9fbb9ef50d4e03a8416347609', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-12 20:49:13.616821 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ca261a129635f921276b9c65e1bbdc3668d46bc05d58cb0f7b67080893789051', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-12 20:49:13.616842 | orchestrator | skipping: [testbed-node-4] => (item={'id': '246bd0a7a2d3a5ae9b3bc75588952fe2dac11a71ee6cb03e46a074a818d88e1a', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-12 20:49:13.616885 | orchestrator | skipping: [testbed-node-4] => (item={'id': '5418d73ee8eb3946604fccbd7b40eac27107e5c032ed2e3faf2b09f9f530dcb9', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-07-12 20:49:13.616901 | orchestrator | skipping: [testbed-node-4] => (item={'id': '49ccfbc229904bd8b51ad220d2faa8ffe775b2da611bf07a32b4757ab149ddb5', 'image': 'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-07-12 20:49:13.616916 | orchestrator | 
skipping: [testbed-node-4] => (item={'id': 'b408e44695462d0dff30eedd8a35f09d579a1b26de5e51efb8216a4032bdd660', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-12 20:49:13.616932 | orchestrator | skipping: [testbed-node-4] => (item={'id': '3c00dea8d29bccc16bda90f7f3ca8fcf94d47ba6249c2d01039a549f3e541358', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-07-12 20:49:13.616948 | orchestrator | skipping: [testbed-node-4] => (item={'id': '08ff49ff0755306d6cb7e7c425342ce58f3975820d02269d866bbdd5ed8503e3', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-07-12 20:49:13.616963 | orchestrator | skipping: [testbed-node-4] => (item={'id': '9e5a853384c71e6d53097a31b2020a365494911f71193bc5201c85130ebaafb3', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-4-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-12 20:49:13.616979 | orchestrator | skipping: [testbed-node-4] => (item={'id': '323b41fa8430ee3192e6a427608b0a61f7bb25875e748f9809fe1a9f0ff8d1f3', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-4', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-12 20:49:13.616994 | orchestrator | skipping: [testbed-node-4] => (item={'id': '44cb3b4696039e9c47e493b0282e16254184b0d148c0d18636cf6789a295918f', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-4', 'state': 'running', 'status': 'Up 24 minutes'})  2025-07-12 20:49:13.617029 | orchestrator | ok: [testbed-node-4] => (item={'id': '4725569b826fd8fe6aa7f748c5be61db1526ebb8e1b360d6b3e2388ecb96587f', 
'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-1', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-12 20:49:13.617050 | orchestrator | ok: [testbed-node-4] => (item={'id': '14702d2573222ff81de7322963d3473a676ac35c412e0634f9eae16600ae9961', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-3', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-12 20:49:13.617066 | orchestrator | skipping: [testbed-node-4] => (item={'id': '47b3700340c50745eb66d063f36f3140571e1ebb2be2d189230b4fa9d65932d5', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-07-12 20:49:13.617101 | orchestrator | skipping: [testbed-node-4] => (item={'id': '8bf8671ebdc459e2ff88f8a6b4f000851cd7061ff3fd40101b665c854ba826bf', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-07-12 20:49:13.617117 | orchestrator | skipping: [testbed-node-4] => (item={'id': '015170c4b8e7942ba4e00bbecdf80a76fad7c6c32f1c4d31a095a0d9faef7c26', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-07-12 20:49:13.617133 | orchestrator | skipping: [testbed-node-4] => (item={'id': '13fbbc2a05fa0a81aa276a4e73c318323637955ceeb2c2f40a3ee87bbc232dcb', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-12 20:49:13.617157 | orchestrator | skipping: [testbed-node-4] => (item={'id': 'ac9513f54a06367af1d26c2da7358682df6d6392aa8a24d9345c304128c6a4ed', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-12 20:49:13.617172 | orchestrator 
| skipping: [testbed-node-4] => (item={'id': '51c1a3e28cbda61f000905133e29af386a92d3a13ea5a515e71ca84437433da3', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-07-12 20:49:13.617188 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'd03eeca3b9d93d5529ef80f3bf4afc4be1cde579a8f60388872270e61e9e7ea6', 'image': 'registry.osism.tech/kolla/release/nova-compute:30.0.1.20250711', 'name': '/nova_compute', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-12 20:49:13.617203 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ab7b7b2c23fceda2cd30f1afa0030a51f7938fb141c4efb1da849f69f034fc0c', 'image': 'registry.osism.tech/kolla/release/nova-libvirt:9.0.0.20250711', 'name': '/nova_libvirt', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-12 20:49:13.617218 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c3fc7751b61c193482af1695e618045d3f4edf48c1d66f89023738e41f7c4c80', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:25.2.1.20250711', 'name': '/neutron_ovn_metadata_agent', 'state': 'running', 'status': 'Up 9 minutes (healthy)'})  2025-07-12 20:49:13.617235 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'dfd9b0d75be66e3c7697cd9505c61cbc2582939de07e13b2780723d3603f6d15', 'image': 'registry.osism.tech/kolla/release/nova-ssh:30.0.1.20250711', 'name': '/nova_ssh', 'state': 'running', 'status': 'Up 10 minutes (healthy)'})  2025-07-12 20:49:13.617252 | orchestrator | skipping: [testbed-node-5] => (item={'id': '78c954a8650fba06955e0ac0b887605b356355385663ff527e2ab1fd28032bd2', 'image': 'registry.osism.tech/kolla/release/cinder-backup:25.2.1.20250711', 'name': '/cinder_backup', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-07-12 20:49:13.617269 | orchestrator | skipping: [testbed-node-5] => (item={'id': '4fa28807df3caf69399b086c0f42e857f3054826801332e9b07f551b709aefb8', 'image': 
'registry.osism.tech/kolla/release/cinder-volume:25.2.1.20250711', 'name': '/cinder_volume', 'state': 'running', 'status': 'Up 14 minutes (healthy)'})  2025-07-12 20:49:13.617285 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'af6b1535e0ebc837ff2a5a06c2f6f808954d06e3e08549478a3b2be659c54b40', 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:0.20250711.0.20250711', 'name': '/prometheus_libvirt_exporter', 'state': 'running', 'status': 'Up 14 minutes'})  2025-07-12 20:49:13.617302 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'ff1ecd565df404b20c05476dc263d8a263d4662f5f5ef25ac1bda5c106648c00', 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.2.20250711', 'name': '/prometheus_cadvisor', 'state': 'running', 'status': 'Up 15 minutes'})  2025-07-12 20:49:13.617325 | orchestrator | skipping: [testbed-node-5] => (item={'id': '16431b56203a25bb76999248ae794c3044e061e48cdaa863052a225641be4d5d', 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.8.2.20250711', 'name': '/prometheus_node_exporter', 'state': 'running', 'status': 'Up 16 minutes'})  2025-07-12 20:49:13.617343 | orchestrator | skipping: [testbed-node-5] => (item={'id': '95d1414586fe9b33facef78127ac45c4e9d6db6e0d2cc259227c290582c78801', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-rgw-default-testbed-node-5-rgw0', 'state': 'running', 'status': 'Up 22 minutes'})  2025-07-12 20:49:13.617438 | orchestrator | skipping: [testbed-node-5] => (item={'id': '11d6ddbcdca71c9ec6ea6fcf479a5739deeae528076684f1a5d708afb249c791', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-mds-testbed-node-5', 'state': 'running', 'status': 'Up 23 minutes'})  2025-07-12 20:49:21.283890 | orchestrator | skipping: [testbed-node-5] => (item={'id': '7ac1d1148d4516187eaefef8f42fb22c432c3b1cec73cebd40cf9f1c96d00800', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-crash-testbed-node-5', 
'state': 'running', 'status': 'Up 24 minutes'})  2025-07-12 20:49:21.284010 | orchestrator | ok: [testbed-node-5] => (item={'id': '09eaa543c68f8006de1e1d459ac26f40a50ecdcad3b2849c8a75313945bffa6b', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-2', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-12 20:49:21.284027 | orchestrator | ok: [testbed-node-5] => (item={'id': '7661c0cbdee15ae06559f6848ae768c9b66c9d377dc047e0ed84ecbc931a05ff', 'image': 'registry.osism.tech/osism/ceph-daemon:18.2.7', 'name': '/ceph-osd-4', 'state': 'running', 'status': 'Up 25 minutes'}) 2025-07-12 20:49:21.284040 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'beec9a3815274490e0dc2031ce59ec8765e5a45d28d7250471515d1adf832a97', 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.9.2.20250711', 'name': '/ovn_controller', 'state': 'running', 'status': 'Up 28 minutes'})  2025-07-12 20:49:21.284054 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8fd617fe55028d8944744ffc9c13c06ee6b49f628c237032aab6fd61fab73c21', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.4.2.20250711', 'name': '/openvswitch_vswitchd', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-07-12 20:49:21.284085 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'c02f2fe025bb3cd502bdb50c949e39ce2845c7277b20495475dc8c444d28466b', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.4.2.20250711', 'name': '/openvswitch_db', 'state': 'running', 'status': 'Up 30 minutes (healthy)'})  2025-07-12 20:49:21.284099 | orchestrator | skipping: [testbed-node-5] => (item={'id': '8d6c3f0ddfd7a886b943bf91ae97fe06654a38fdee110f4c9ac1dc30b41b9a82', 'image': 'registry.osism.tech/kolla/release/cron:3.0.20250711', 'name': '/cron', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-12 20:49:21.284111 | orchestrator | skipping: [testbed-node-5] => (item={'id': 
'6b9052a8d1f7ff7de0705b3a55de413408d91ee65f2a6fd67b124e4ac1a37532', 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:19.5.1.20250711', 'name': '/kolla_toolbox', 'state': 'running', 'status': 'Up 31 minutes'})  2025-07-12 20:49:21.284123 | orchestrator | skipping: [testbed-node-5] => (item={'id': 'cd7810500ef7bb0053f72ff8f1f50606847dbefdc71333f74b176e30fdb7fefa', 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.7.20250711', 'name': '/fluentd', 'state': 'running', 'status': 'Up 32 minutes'})  2025-07-12 20:49:21.284135 | orchestrator | 2025-07-12 20:49:21.284148 | orchestrator | TASK [Get count of ceph-osd containers on host] ******************************** 2025-07-12 20:49:21.284161 | orchestrator | Saturday 12 July 2025 20:49:13 +0000 (0:00:00.495) 0:00:05.990 ********* 2025-07-12 20:49:21.284173 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:49:21.284185 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:49:21.284196 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:49:21.284208 | orchestrator | 2025-07-12 20:49:21.284220 | orchestrator | TASK [Set test result to failed when count of containers is wrong] ************* 2025-07-12 20:49:21.284231 | orchestrator | Saturday 12 July 2025 20:49:14 +0000 (0:00:00.305) 0:00:06.296 ********* 2025-07-12 20:49:21.284244 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:49:21.284257 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:49:21.284269 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:49:21.284281 | orchestrator | 2025-07-12 20:49:21.284293 | orchestrator | TASK [Set test result to passed if count matches] ****************************** 2025-07-12 20:49:21.284304 | orchestrator | Saturday 12 July 2025 20:49:14 +0000 (0:00:00.353) 0:00:06.649 ********* 2025-07-12 20:49:21.284316 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:49:21.284329 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:49:21.284341 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:49:21.284451 
| orchestrator | 2025-07-12 20:49:21.284467 | orchestrator | TASK [Prepare test data] ******************************************************* 2025-07-12 20:49:21.284481 | orchestrator | Saturday 12 July 2025 20:49:14 +0000 (0:00:00.484) 0:00:07.134 ********* 2025-07-12 20:49:21.284512 | orchestrator | ok: [testbed-node-3] 2025-07-12 20:49:21.284526 | orchestrator | ok: [testbed-node-4] 2025-07-12 20:49:21.284539 | orchestrator | ok: [testbed-node-5] 2025-07-12 20:49:21.284554 | orchestrator | 2025-07-12 20:49:21.284568 | orchestrator | TASK [Get list of ceph-osd containers that are not running] ******************** 2025-07-12 20:49:21.284583 | orchestrator | Saturday 12 July 2025 20:49:15 +0000 (0:00:00.310) 0:00:07.444 ********* 2025-07-12 20:49:21.284608 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-0', 'osd_id': '0', 'state': 'running'})  2025-07-12 20:49:21.284621 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'ceph-osd-5', 'osd_id': '5', 'state': 'running'})  2025-07-12 20:49:21.284642 | orchestrator | skipping: [testbed-node-3] 2025-07-12 20:49:21.284654 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-1', 'osd_id': '1', 'state': 'running'})  2025-07-12 20:49:21.284665 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'ceph-osd-3', 'osd_id': '3', 'state': 'running'})  2025-07-12 20:49:21.284695 | orchestrator | skipping: [testbed-node-4] 2025-07-12 20:49:21.284707 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-2', 'osd_id': '2', 'state': 'running'})  2025-07-12 20:49:21.284718 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'ceph-osd-4', 'osd_id': '4', 'state': 'running'})  2025-07-12 20:49:21.284729 | orchestrator | skipping: [testbed-node-5] 2025-07-12 20:49:21.284741 | orchestrator | 2025-07-12 20:49:21.284751 | orchestrator | TASK [Get count of ceph-osd containers that are not running] ******************* 2025-07-12 20:49:21.284763 | 
orchestrator | Saturday 12 July 2025 20:49:15 +0000 (0:00:00.310) 0:00:07.755 *********
2025-07-12 20:49:21.284774 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:49:21.284786 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:49:21.284797 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:49:21.284808 | orchestrator |
2025-07-12 20:49:21.284819 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-07-12 20:49:21.284831 | orchestrator | Saturday 12 July 2025 20:49:15 +0000 (0:00:00.312) 0:00:08.067 *********
2025-07-12 20:49:21.284842 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:49:21.284854 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:49:21.284864 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:49:21.284874 | orchestrator |
2025-07-12 20:49:21.284885 | orchestrator | TASK [Set test result to failed if an OSD is not running] **********************
2025-07-12 20:49:21.284895 | orchestrator | Saturday 12 July 2025 20:49:16 +0000 (0:00:00.513) 0:00:08.581 *********
2025-07-12 20:49:21.284906 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:49:21.284916 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:49:21.284927 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:49:21.284938 | orchestrator |
2025-07-12 20:49:21.284949 | orchestrator | TASK [Set test result to passed if all containers are running] *****************
2025-07-12 20:49:21.284961 | orchestrator | Saturday 12 July 2025 20:49:16 +0000 (0:00:00.315) 0:00:08.896 *********
2025-07-12 20:49:21.284971 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:49:21.284982 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:49:21.285007 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:49:21.285019 | orchestrator |
2025-07-12 20:49:21.285026 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 20:49:21.285032 | orchestrator | Saturday 12 July 2025 20:49:16 +0000 (0:00:00.305) 0:00:09.202 *********
2025-07-12 20:49:21.285039 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:49:21.285046 | orchestrator |
2025-07-12 20:49:21.285052 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 20:49:21.285059 | orchestrator | Saturday 12 July 2025 20:49:17 +0000 (0:00:00.243) 0:00:09.446 *********
2025-07-12 20:49:21.285066 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:49:21.285083 | orchestrator |
2025-07-12 20:49:21.285090 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 20:49:21.285096 | orchestrator | Saturday 12 July 2025 20:49:17 +0000 (0:00:00.233) 0:00:09.679 *********
2025-07-12 20:49:21.285103 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:49:21.285110 | orchestrator |
2025-07-12 20:49:21.285117 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:49:21.285124 | orchestrator | Saturday 12 July 2025 20:49:17 +0000 (0:00:00.069) 0:00:09.919 *********
2025-07-12 20:49:21.285131 | orchestrator |
2025-07-12 20:49:21.285137 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:49:21.285144 | orchestrator | Saturday 12 July 2025 20:49:17 +0000 (0:00:00.069) 0:00:09.989 *********
2025-07-12 20:49:21.285151 | orchestrator |
2025-07-12 20:49:21.285157 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:49:21.285164 | orchestrator | Saturday 12 July 2025 20:49:17 +0000 (0:00:00.249) 0:00:10.238 *********
2025-07-12 20:49:21.285171 | orchestrator |
2025-07-12 20:49:21.285177 | orchestrator | TASK [Print report file information] *******************************************
2025-07-12 20:49:21.285184 | orchestrator | Saturday 12 July 2025 20:49:18 +0000 (0:00:00.068) 0:00:10.307 *********
2025-07-12 20:49:21.285191 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:49:21.285197 | orchestrator |
2025-07-12 20:49:21.285204 | orchestrator | TASK [Fail early due to containers not running] ********************************
2025-07-12 20:49:21.285211 | orchestrator | Saturday 12 July 2025 20:49:18 +0000 (0:00:00.253) 0:00:10.561 *********
2025-07-12 20:49:21.285218 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:49:21.285224 | orchestrator |
2025-07-12 20:49:21.285231 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 20:49:21.285238 | orchestrator | Saturday 12 July 2025 20:49:18 +0000 (0:00:00.256) 0:00:10.818 *********
2025-07-12 20:49:21.285244 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:49:21.285251 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:49:21.285258 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:49:21.285265 | orchestrator |
2025-07-12 20:49:21.285271 | orchestrator | TASK [Set _mon_hostname fact] **************************************************
2025-07-12 20:49:21.285278 | orchestrator | Saturday 12 July 2025 20:49:18 +0000 (0:00:00.310) 0:00:11.129 *********
2025-07-12 20:49:21.285285 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:49:21.285291 | orchestrator |
2025-07-12 20:49:21.285298 | orchestrator | TASK [Get ceph osd tree] *******************************************************
2025-07-12 20:49:21.285305 | orchestrator | Saturday 12 July 2025 20:49:19 +0000 (0:00:00.231) 0:00:11.360 *********
2025-07-12 20:49:21.285312 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-07-12 20:49:21.285319 | orchestrator |
2025-07-12 20:49:21.285326 | orchestrator | TASK [Parse osd tree from JSON] ************************************************
2025-07-12 20:49:21.285333 | orchestrator | Saturday 12 July 2025 20:49:20 +0000 (0:00:01.613) 0:00:12.974 *********
2025-07-12 20:49:21.285339 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:49:21.285346 | orchestrator |
2025-07-12 20:49:21.285353 | orchestrator | TASK [Get OSDs that are not up or in] ******************************************
2025-07-12 20:49:21.285360 | orchestrator | Saturday 12 July 2025 20:49:20 +0000 (0:00:00.148) 0:00:13.123 *********
2025-07-12 20:49:21.285383 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:49:21.285395 | orchestrator |
2025-07-12 20:49:21.285404 | orchestrator | TASK [Fail test if OSDs are not up or in] **************************************
2025-07-12 20:49:21.285411 | orchestrator | Saturday 12 July 2025 20:49:21 +0000 (0:00:00.313) 0:00:13.437 *********
2025-07-12 20:49:21.285424 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:49:34.548976 | orchestrator |
2025-07-12 20:49:34.549103 | orchestrator | TASK [Pass test if OSDs are all up and in] *************************************
2025-07-12 20:49:34.549122 | orchestrator | Saturday 12 July 2025 20:49:21 +0000 (0:00:00.134) 0:00:13.571 *********
2025-07-12 20:49:34.549140 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:49:34.549160 | orchestrator |
2025-07-12 20:49:34.549176 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 20:49:34.549225 | orchestrator | Saturday 12 July 2025 20:49:21 +0000 (0:00:00.345) 0:00:13.916 *********
2025-07-12 20:49:34.549242 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:49:34.549258 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:49:34.549275 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:49:34.549292 | orchestrator |
2025-07-12 20:49:34.549309 | orchestrator | TASK [List ceph LVM volumes and collect data] **********************************
2025-07-12 20:49:34.549327 | orchestrator | Saturday 12 July 2025 20:49:21 +0000 (0:00:00.318) 0:00:14.235 *********
2025-07-12 20:49:34.549347 | orchestrator | changed: [testbed-node-3]
2025-07-12 20:49:34.549367 | orchestrator | changed: [testbed-node-4]
2025-07-12 20:49:34.549447 | orchestrator |
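The validator steps above ("Get ceph osd tree", "Parse osd tree from JSON", "Get OSDs that are not up or in") amount to filtering the JSON output of `ceph osd tree` for unhealthy OSDs. A minimal sketch of that filtering logic, on toy data shaped like `ceph osd tree -f json` output (illustrative only, not the actual playbook code or this cluster's data):

```python
import json

# Toy sample in the shape of `ceph osd tree -f json` output
# (assumed structure; not taken from this job's cluster).
osd_tree_json = """
{"nodes": [
  {"id": -1, "name": "default", "type": "root", "children": [0, 1, 2]},
  {"id": 0, "name": "osd.0", "type": "osd", "status": "up", "reweight": 1.0},
  {"id": 1, "name": "osd.1", "type": "osd", "status": "up", "reweight": 1.0},
  {"id": 2, "name": "osd.2", "type": "osd", "status": "down", "reweight": 0.0}
]}
"""

def osds_not_up_or_in(tree: dict) -> list[str]:
    """Return names of OSD nodes that are not up, or are out (reweight 0)."""
    bad = []
    for node in tree["nodes"]:
        if node.get("type") != "osd":
            continue  # skip root/host buckets
        if node.get("status") != "up" or node.get("reweight", 0) == 0:
            bad.append(node["name"])
    return bad

tree = json.loads(osd_tree_json)
print(osds_not_up_or_in(tree))  # ['osd.2']
```

The playbook's "Pass test if OSDs are all up and in" task corresponds to this list coming back empty.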
changed: [testbed-node-5]
2025-07-12 20:49:34.549459 | orchestrator |
2025-07-12 20:49:34.549470 | orchestrator | TASK [Parse LVM data as JSON] **************************************************
2025-07-12 20:49:34.549481 | orchestrator | Saturday 12 July 2025 20:49:24 +0000 (0:00:02.318) 0:00:16.553 *********
2025-07-12 20:49:34.549492 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:49:34.549504 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:49:34.549517 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:49:34.549528 | orchestrator |
2025-07-12 20:49:34.549540 | orchestrator | TASK [Get unencrypted and encrypted OSDs] **************************************
2025-07-12 20:49:34.549553 | orchestrator | Saturday 12 July 2025 20:49:24 +0000 (0:00:00.354) 0:00:16.907 *********
2025-07-12 20:49:34.549565 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:49:34.549578 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:49:34.549640 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:49:34.549654 | orchestrator |
2025-07-12 20:49:34.549666 | orchestrator | TASK [Fail if count of encrypted OSDs does not match] **************************
2025-07-12 20:49:34.549679 | orchestrator | Saturday 12 July 2025 20:49:25 +0000 (0:00:00.724) 0:00:17.632 *********
2025-07-12 20:49:34.549691 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:49:34.549702 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:49:34.549713 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:49:34.549724 | orchestrator |
2025-07-12 20:49:34.549734 | orchestrator | TASK [Pass if count of encrypted OSDs equals count of OSDs] ********************
2025-07-12 20:49:34.549745 | orchestrator | Saturday 12 July 2025 20:49:25 +0000 (0:00:00.329) 0:00:17.961 *********
2025-07-12 20:49:34.549756 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:49:34.549766 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:49:34.549777 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:49:34.549788 | orchestrator |
2025-07-12 20:49:34.549800 | orchestrator | TASK [Fail if count of unencrypted OSDs does not match] ************************
2025-07-12 20:49:34.549811 | orchestrator | Saturday 12 July 2025 20:49:25 +0000 (0:00:00.325) 0:00:18.287 *********
2025-07-12 20:49:34.549822 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:49:34.549832 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:49:34.549843 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:49:34.549854 | orchestrator |
2025-07-12 20:49:34.549865 | orchestrator | TASK [Pass if count of unencrypted OSDs equals count of OSDs] ******************
2025-07-12 20:49:34.549876 | orchestrator | Saturday 12 July 2025 20:49:26 +0000 (0:00:00.313) 0:00:18.601 *********
2025-07-12 20:49:34.549887 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:49:34.549897 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:49:34.549908 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:49:34.549919 | orchestrator |
2025-07-12 20:49:34.549930 | orchestrator | TASK [Prepare test data] *******************************************************
2025-07-12 20:49:34.549940 | orchestrator | Saturday 12 July 2025 20:49:26 +0000 (0:00:00.526) 0:00:19.128 *********
2025-07-12 20:49:34.549951 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:49:34.549962 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:49:34.549972 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:49:34.549983 | orchestrator |
2025-07-12 20:49:34.549994 | orchestrator | TASK [Get CRUSH node data of each OSD host and root node childs] ***************
2025-07-12 20:49:34.550005 | orchestrator | Saturday 12 July 2025 20:49:27 +0000 (0:00:00.543) 0:00:19.671 *********
2025-07-12 20:49:34.550105 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:49:34.550118 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:49:34.550129 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:49:34.550140 | orchestrator |
2025-07-12 20:49:34.550151 | orchestrator | TASK [Calculate sub test expression results] ***********************************
2025-07-12 20:49:34.550162 | orchestrator | Saturday 12 July 2025 20:49:27 +0000 (0:00:00.574) 0:00:20.246 *********
2025-07-12 20:49:34.550173 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:49:34.550184 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:49:34.550195 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:49:34.550206 | orchestrator |
2025-07-12 20:49:34.550217 | orchestrator | TASK [Fail test if any sub test failed] ****************************************
2025-07-12 20:49:34.550228 | orchestrator | Saturday 12 July 2025 20:49:28 +0000 (0:00:00.321) 0:00:20.568 *********
2025-07-12 20:49:34.550239 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:49:34.550255 | orchestrator | skipping: [testbed-node-4]
2025-07-12 20:49:34.550266 | orchestrator | skipping: [testbed-node-5]
2025-07-12 20:49:34.550277 | orchestrator |
2025-07-12 20:49:34.550288 | orchestrator | TASK [Pass test if no sub test failed] *****************************************
2025-07-12 20:49:34.550299 | orchestrator | Saturday 12 July 2025 20:49:28 +0000 (0:00:00.312) 0:00:20.881 *********
2025-07-12 20:49:34.550310 | orchestrator | ok: [testbed-node-3]
2025-07-12 20:49:34.550321 | orchestrator | ok: [testbed-node-4]
2025-07-12 20:49:34.550331 | orchestrator | ok: [testbed-node-5]
2025-07-12 20:49:34.550342 | orchestrator |
2025-07-12 20:49:34.550353 | orchestrator | TASK [Set validation result to passed if no test failed] ***********************
2025-07-12 20:49:34.550364 | orchestrator | Saturday 12 July 2025 20:49:29 +0000 (0:00:00.557) 0:00:21.438 *********
2025-07-12 20:49:34.550451 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 20:49:34.550466 | orchestrator |
2025-07-12 20:49:34.550477 | orchestrator | TASK [Set validation result to failed if a test failed] ************************
2025-07-12 20:49:34.550488 | orchestrator | Saturday 12 July 2025 20:49:29 +0000 (0:00:00.255) 0:00:21.693 *********
2025-07-12 20:49:34.550499 | orchestrator | skipping: [testbed-node-3]
2025-07-12 20:49:34.550510 | orchestrator |
2025-07-12 20:49:34.550545 | orchestrator | TASK [Aggregate test results step one] *****************************************
2025-07-12 20:49:34.550557 | orchestrator | Saturday 12 July 2025 20:49:29 +0000 (0:00:00.264) 0:00:21.958 *********
2025-07-12 20:49:34.550568 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 20:49:34.550579 | orchestrator |
2025-07-12 20:49:34.550590 | orchestrator | TASK [Aggregate test results step two] *****************************************
2025-07-12 20:49:34.550601 | orchestrator | Saturday 12 July 2025 20:49:31 +0000 (0:00:01.733) 0:00:23.692 *********
2025-07-12 20:49:34.550612 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 20:49:34.550622 | orchestrator |
2025-07-12 20:49:34.550633 | orchestrator | TASK [Aggregate test results step three] ***************************************
2025-07-12 20:49:34.550644 | orchestrator | Saturday 12 July 2025 20:49:31 +0000 (0:00:00.263) 0:00:23.955 *********
2025-07-12 20:49:34.550655 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 20:49:34.550665 | orchestrator |
2025-07-12 20:49:34.550676 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:49:34.550687 | orchestrator | Saturday 12 July 2025 20:49:31 +0000 (0:00:00.255) 0:00:24.211 *********
2025-07-12 20:49:34.550697 | orchestrator |
2025-07-12 20:49:34.550707 | orchestrator | TASK [Flush handlers] **********************************************************
2025-07-12 20:49:34.550716 | orchestrator | Saturday 12 July 2025 20:49:32 +0000 (0:00:00.088) 0:00:24.299 *********
2025-07-12 20:49:34.550725 | orchestrator |
2025-07-12 20:49:34.550735 | orchestrator | TASK [Flush handlers]
**********************************************************
2025-07-12 20:49:34.550745 | orchestrator | Saturday 12 July 2025 20:49:32 +0000 (0:00:00.067) 0:00:24.367 *********
2025-07-12 20:49:34.550754 | orchestrator |
2025-07-12 20:49:34.550764 | orchestrator | RUNNING HANDLER [Write report file] ********************************************
2025-07-12 20:49:34.550781 | orchestrator | Saturday 12 July 2025 20:49:32 +0000 (0:00:00.072) 0:00:24.440 *********
2025-07-12 20:49:34.550791 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-07-12 20:49:34.550801 | orchestrator |
2025-07-12 20:49:34.550810 | orchestrator | TASK [Print report file information] *******************************************
2025-07-12 20:49:34.550820 | orchestrator | Saturday 12 July 2025 20:49:33 +0000 (0:00:01.517) 0:00:25.957 *********
2025-07-12 20:49:34.550829 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => {
2025-07-12 20:49:34.550839 | orchestrator |  "msg": [
2025-07-12 20:49:34.550849 | orchestrator |  "Validator run completed.",
2025-07-12 20:49:34.550858 | orchestrator |  "You can find the report file here:",
2025-07-12 20:49:34.550868 | orchestrator |  "/opt/reports/validator/ceph-osds-validator-2025-07-12T20:49:08+00:00-report.json",
2025-07-12 20:49:34.550879 | orchestrator |  "on the following host:",
2025-07-12 20:49:34.550888 | orchestrator |  "testbed-manager"
2025-07-12 20:49:34.550898 | orchestrator |  ]
2025-07-12 20:49:34.550908 | orchestrator | }
2025-07-12 20:49:34.550918 | orchestrator |
2025-07-12 20:49:34.550927 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:49:34.550938 | orchestrator | testbed-node-3 : ok=35  changed=4  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2025-07-12 20:49:34.550949 | orchestrator | testbed-node-4 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-12 20:49:34.550959 | orchestrator | testbed-node-5 : ok=18  changed=1  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-07-12 20:49:34.550968 | orchestrator |
2025-07-12 20:49:34.550978 | orchestrator |
2025-07-12 20:49:34.550988 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:49:34.550997 | orchestrator | Saturday 12 July 2025 20:49:34 +0000 (0:00:00.853) 0:00:26.811 *********
2025-07-12 20:49:34.551007 | orchestrator | ===============================================================================
2025-07-12 20:49:34.551016 | orchestrator | List ceph LVM volumes and collect data ---------------------------------- 2.32s
2025-07-12 20:49:34.551026 | orchestrator | Aggregate test results step one ----------------------------------------- 1.73s
2025-07-12 20:49:34.551035 | orchestrator | Get timestamp for report file ------------------------------------------- 1.68s
2025-07-12 20:49:34.551045 | orchestrator | Get ceph osd tree ------------------------------------------------------- 1.61s
2025-07-12 20:49:34.551054 | orchestrator | Write report file ------------------------------------------------------- 1.52s
2025-07-12 20:49:34.551064 | orchestrator | Create report output directory ------------------------------------------ 1.02s
2025-07-12 20:49:34.551078 | orchestrator | Print report file information ------------------------------------------- 0.85s
2025-07-12 20:49:34.551088 | orchestrator | Get unencrypted and encrypted OSDs -------------------------------------- 0.72s
2025-07-12 20:49:34.551098 | orchestrator | Get CRUSH node data of each OSD host and root node childs --------------- 0.57s
2025-07-12 20:49:34.551107 | orchestrator | Pass test if no sub test failed ----------------------------------------- 0.56s
2025-07-12 20:49:34.551117 | orchestrator | Prepare test data ------------------------------------------------------- 0.54s
2025-07-12 20:49:34.551126 | orchestrator | Pass if count of unencrypted OSDs equals count of OSDs ------------------ 0.53s
2025-07-12 20:49:34.551136 | orchestrator | Calculate total number of OSDs in cluster ------------------------------- 0.52s
2025-07-12 20:49:34.551145 | orchestrator | Set test result to failed if an OSD is not running ---------------------- 0.51s
2025-07-12 20:49:34.551155 | orchestrator | Get list of ceph-osd containers on host --------------------------------- 0.50s
2025-07-12 20:49:34.551164 | orchestrator | Set test result to passed if count matches ------------------------------ 0.48s
2025-07-12 20:49:34.551180 | orchestrator | Prepare test data ------------------------------------------------------- 0.47s
2025-07-12 20:49:34.836402 | orchestrator | Flush handlers ---------------------------------------------------------- 0.39s
2025-07-12 20:49:34.836508 | orchestrator | Calculate OSD devices for each host ------------------------------------- 0.36s
2025-07-12 20:49:34.836522 | orchestrator | Parse LVM data as JSON -------------------------------------------------- 0.35s
2025-07-12 20:49:35.150498 | orchestrator | + sh -c /opt/configuration/scripts/check/200-infrastructure.sh
2025-07-12 20:49:35.158213 | orchestrator | + set -e
2025-07-12 20:49:35.158306 | orchestrator | + source /opt/manager-vars.sh
2025-07-12 20:49:35.158321 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-07-12 20:49:35.158331 | orchestrator | ++ NUMBER_OF_NODES=6
2025-07-12 20:49:35.158339 | orchestrator | ++ export CEPH_VERSION=reef
2025-07-12 20:49:35.158348 | orchestrator | ++ CEPH_VERSION=reef
2025-07-12 20:49:35.158357 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-07-12 20:49:35.158507 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-07-12 20:49:35.158554 | orchestrator | ++ export MANAGER_VERSION=9.2.0
2025-07-12 20:49:35.158588 | orchestrator | ++ MANAGER_VERSION=9.2.0
2025-07-12 20:49:35.158599 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-07-12 20:49:35.158607 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-07-12 20:49:35.158616 | orchestrator | ++ export ARA=false
2025-07-12 20:49:35.158625 | orchestrator | ++ ARA=false
2025-07-12 20:49:35.158674 | orchestrator | ++ export DEPLOY_MODE=manager
2025-07-12 20:49:35.158684 | orchestrator | ++ DEPLOY_MODE=manager
2025-07-12 20:49:35.158693 | orchestrator | ++ export TEMPEST=false
2025-07-12 20:49:35.158727 | orchestrator | ++ TEMPEST=false
2025-07-12 20:49:35.158736 | orchestrator | ++ export IS_ZUUL=true
2025-07-12 20:49:35.158745 | orchestrator | ++ IS_ZUUL=true
2025-07-12 20:49:35.158754 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109
2025-07-12 20:49:35.158785 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.109
2025-07-12 20:49:35.158795 | orchestrator | ++ export EXTERNAL_API=false
2025-07-12 20:49:35.158803 | orchestrator | ++ EXTERNAL_API=false
2025-07-12 20:49:35.158812 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-07-12 20:49:35.158825 | orchestrator | ++ IMAGE_USER=ubuntu
2025-07-12 20:49:35.158851 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-07-12 20:49:35.158865 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-07-12 20:49:35.158878 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-07-12 20:49:35.158889 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-07-12 20:49:35.158904 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-07-12 20:49:35.158920 | orchestrator | + source /etc/os-release
2025-07-12 20:49:35.158936 | orchestrator | ++ PRETTY_NAME='Ubuntu 24.04.2 LTS'
2025-07-12 20:49:35.158947 | orchestrator | ++ NAME=Ubuntu
2025-07-12 20:49:35.158956 | orchestrator | ++ VERSION_ID=24.04
2025-07-12 20:49:35.158966 | orchestrator | ++ VERSION='24.04.2 LTS (Noble Numbat)'
2025-07-12 20:49:35.158976 | orchestrator | ++ VERSION_CODENAME=noble
2025-07-12 20:49:35.159128 | orchestrator | ++ ID=ubuntu
2025-07-12 20:49:35.159230 | orchestrator | ++ ID_LIKE=debian
2025-07-12 20:49:35.159248 | orchestrator | ++ HOME_URL=https://www.ubuntu.com/
2025-07-12 20:49:35.159263 |
orchestrator | ++ SUPPORT_URL=https://help.ubuntu.com/
2025-07-12 20:49:35.159277 | orchestrator | ++ BUG_REPORT_URL=https://bugs.launchpad.net/ubuntu/
2025-07-12 20:49:35.159291 | orchestrator | ++ PRIVACY_POLICY_URL=https://www.ubuntu.com/legal/terms-and-policies/privacy-policy
2025-07-12 20:49:35.159305 | orchestrator | ++ UBUNTU_CODENAME=noble
2025-07-12 20:49:35.159318 | orchestrator | ++ LOGO=ubuntu-logo
2025-07-12 20:49:35.159330 | orchestrator | + [[ ubuntu == \u\b\u\n\t\u ]]
2025-07-12 20:49:35.159344 | orchestrator | + packages='libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client'
2025-07-12 20:49:35.159358 | orchestrator | + dpkg -s libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-07-12 20:49:35.179797 | orchestrator | + sudo apt-get install -y libmonitoring-plugin-perl libwww-perl libjson-perl monitoring-plugins-basic mysql-client
2025-07-12 20:49:57.453317 | orchestrator |
2025-07-12 20:49:57.453520 | orchestrator | # Status of Elasticsearch
2025-07-12 20:49:57.453542 | orchestrator |
2025-07-12 20:49:57.453555 | orchestrator | + pushd /opt/configuration/contrib
2025-07-12 20:49:57.453567 | orchestrator | + echo
2025-07-12 20:49:57.453579 | orchestrator | + echo '# Status of Elasticsearch'
2025-07-12 20:49:57.453590 | orchestrator | + echo
2025-07-12 20:49:57.453602 | orchestrator | + bash nagios-plugins/check_elasticsearch -H api-int.testbed.osism.xyz -s
2025-07-12 20:49:57.645312 | orchestrator | OK - elasticsearch (kolla_logging) is running. status: green; timed_out: false; number_of_nodes: 3; number_of_data_nodes: 3; active_primary_shards: 9; active_shards: 22; relocating_shards: 0; initializing_shards: 0; delayed_unassigned_shards: 0; unassigned_shards: 0 | 'active_primary'=9 'active'=22 'relocating'=0 'init'=0 'delay_unass'=0 'unass'=0
2025-07-12 20:49:57.645553 | orchestrator |
2025-07-12 20:49:57.645589 | orchestrator | # Status of MariaDB
2025-07-12 20:49:57.645611 | orchestrator |
2025-07-12 20:49:57.645630 | orchestrator | + echo
2025-07-12 20:49:57.645650 | orchestrator | + echo '# Status of MariaDB'
2025-07-12 20:49:57.645668 | orchestrator | + echo
2025-07-12 20:49:57.645687 | orchestrator | + MARIADB_USER=root_shard_0
2025-07-12 20:49:57.645701 | orchestrator | + bash nagios-plugins/check_galera_cluster -u root_shard_0 -p password -H api-int.testbed.osism.xyz -c 1
2025-07-12 20:49:57.724612 | orchestrator | Reading package lists...
2025-07-12 20:49:58.056363 | orchestrator | Building dependency tree...
2025-07-12 20:49:58.056864 | orchestrator | Reading state information...
2025-07-12 20:49:58.453855 | orchestrator | The following NEW packages will be installed:
2025-07-12 20:49:58.456357 | orchestrator | bc
2025-07-12 20:49:58.540408 | orchestrator | 0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
2025-07-12 20:49:58.540490 | orchestrator | Need to get 85.3 kB of archives.
2025-07-12 20:49:58.540500 | orchestrator | After this operation, 218 kB of additional disk space will be used.
2025-07-12 20:49:58.540508 | orchestrator | Get:1 http://de.archive.ubuntu.com/ubuntu noble/main amd64 bc amd64 1.07.1-3ubuntu4 [85.3 kB]
2025-07-12 20:49:58.962899 | orchestrator | debconf: unable to initialize frontend: Dialog
2025-07-12 20:49:58.962994 | orchestrator | debconf: (Dialog frontend will not work on a dumb terminal, an emacs shell buffer, or without a controlling terminal.)
2025-07-12 20:49:58.963007 | orchestrator | debconf: falling back to frontend: Readline
2025-07-12 20:49:58.985349 | orchestrator | debconf: unable to initialize frontend: Readline
2025-07-12 20:49:58.985501 | orchestrator | debconf: (This frontend requires a controlling tty.)
2025-07-12 20:49:58.985525 | orchestrator | debconf: falling back to frontend: Teletype
2025-07-12 20:49:58.999457 | orchestrator | dpkg-preconfigure: unable to re-open stdin:
2025-07-12 20:49:59.066778 | orchestrator | Fetched 85.3 kB in 0s (777 kB/s)
2025-07-12 20:49:59.122270 | orchestrator | Selecting previously unselected package bc.
2025-07-12 20:49:59.196473 | orchestrator | (Reading database ... 106188 files and directories currently installed.)
2025-07-12 20:49:59.202613 | orchestrator | Preparing to unpack .../bc_1.07.1-3ubuntu4_amd64.deb ...
2025-07-12 20:49:59.209611 | orchestrator | Unpacking bc (1.07.1-3ubuntu4) ...
2025-07-12 20:49:59.345146 | orchestrator | Setting up bc (1.07.1-3ubuntu4) ...
2025-07-12 20:49:59.369352 | orchestrator | Processing triggers for install-info (7.1-3build2) ...
2025-07-12 20:49:59.612858 | orchestrator | Processing triggers for man-db (2.12.0-4build2) ...
2025-07-12 20:50:01.141115 | orchestrator | Disabling Ubuntu mode, explicit restart mode configured
debconf: unable to initialize frontend: Dialog
2025-07-12 20:50:01.141235 | orchestrator | debconf: (Dialog frontend will not work on a dumb terminal, an emacs shell buffer, or without a controlling terminal.)
2025-07-12 20:50:01.141253 | orchestrator | debconf: falling back to frontend: Readline
2025-07-12 20:50:01.158311 | orchestrator | debconf: unable to initialize frontend: Readline
2025-07-12 20:50:01.158839 | orchestrator | debconf: (This frontend requires a controlling tty.)
2025-07-12 20:50:01.158872 | orchestrator | debconf: falling back to frontend: Teletype
2025-07-12 20:50:01.754880 | orchestrator |
2025-07-12 20:50:01.754985 | orchestrator | Running kernel seems to be up-to-date.
2025-07-12 20:50:01.755001 | orchestrator |
2025-07-12 20:50:01.755013 | orchestrator | No services need to be restarted.
2025-07-12 20:50:01.755025 | orchestrator |
2025-07-12 20:50:01.755036 | orchestrator | No containers need to be restarted.
2025-07-12 20:50:01.755047 | orchestrator |
2025-07-12 20:50:01.755059 | orchestrator | No user sessions are running outdated binaries.
2025-07-12 20:50:01.755070 | orchestrator |
2025-07-12 20:50:01.755108 | orchestrator | No VM guests are running outdated hypervisor (qemu) binaries on this host.
2025-07-12 20:50:04.162826 | orchestrator | OK: number of NODES = 3 (wsrep_cluster_size)
2025-07-12 20:50:04.163528 | orchestrator |
2025-07-12 20:50:04.163568 | orchestrator | # Status of Prometheus
2025-07-12 20:50:04.163583 | orchestrator |
2025-07-12 20:50:04.163598 | orchestrator | + echo
2025-07-12 20:50:04.163619 | orchestrator | + echo '# Status of Prometheus'
2025-07-12 20:50:04.163639 | orchestrator | + echo
2025-07-12 20:50:04.163653 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/healthy
2025-07-12 20:50:04.213639 | orchestrator | Unauthorized
2025-07-12 20:50:04.216848 | orchestrator | + curl -s https://api-int.testbed.osism.xyz:9091/-/ready
2025-07-12 20:50:04.293536 | orchestrator | Unauthorized
2025-07-12 20:50:04.297477 | orchestrator |
2025-07-12 20:50:04.297528 | orchestrator | # Status of RabbitMQ
2025-07-12 20:50:04.297541 | orchestrator |
2025-07-12 20:50:04.297552 | orchestrator | + echo
2025-07-12 20:50:04.297564 | orchestrator | + echo '# Status of RabbitMQ'
2025-07-12 20:50:04.297575 | orchestrator | + echo
2025-07-12 20:50:04.297587 | orchestrator | + perl nagios-plugins/check_rabbitmq_cluster --ssl 1 -H api-int.testbed.osism.xyz -u openstack -p password
2025-07-12 20:50:04.741967 | orchestrator | RABBITMQ_CLUSTER OK - nb_running_node OK (3) nb_running_disc_node OK (3) nb_running_ram_node OK (0)
2025-07-12 20:50:04.751303 | orchestrator |
2025-07-12 20:50:04.751389 | orchestrator | # Status of Redis
2025-07-12 20:50:04.751475 | orchestrator |
2025-07-12 20:50:04.751487 | orchestrator | + echo
2025-07-12 20:50:04.751499 | orchestrator | + echo '# Status of Redis'
2025-07-12 20:50:04.751511 | orchestrator | + echo
2025-07-12 20:50:04.751524 | orchestrator | + /usr/lib/nagios/plugins/check_tcp -H 192.168.16.10 -p 6379 -A -E -s 'AUTH QHNA1SZRlOKzLADhUd5ZDgpHfQe6dNfr3bwEdY24\r\nPING\r\nINFO replication\r\nQUIT\r\n' -e PONG -e role:master -e slave0:ip=192.168.16.1 -e,port=6379 -j
2025-07-12 20:50:04.758780 | orchestrator | TCP OK - 0.002 second response time on 192.168.16.10 port 6379|time=0.001766s;;;0.000000;10.000000
2025-07-12 20:50:04.758829 | orchestrator |
2025-07-12 20:50:04.758842 | orchestrator | # Create backup of MariaDB database
2025-07-12 20:50:04.758855 | orchestrator |
2025-07-12 20:50:04.758867 | orchestrator | + popd
2025-07-12 20:50:04.758878 | orchestrator | + echo
2025-07-12 20:50:04.758889 | orchestrator | + echo '# Create backup of MariaDB database'
2025-07-12 20:50:04.758900 | orchestrator | + echo
2025-07-12 20:50:04.758911 | orchestrator | + osism apply mariadb_backup -e mariadb_backup_type=full
2025-07-12 20:50:06.703679 | orchestrator | 2025-07-12 20:50:06 | INFO  | Task 76f72303-7180-40bb-b4b9-a09e03b17a50 (mariadb_backup) was prepared for execution.
2025-07-12 20:50:06.703793 | orchestrator | 2025-07-12 20:50:06 | INFO  | It takes a moment until task 76f72303-7180-40bb-b4b9-a09e03b17a50 (mariadb_backup) has been started and output is visible here.
2025-07-12 20:53:23.543108 | orchestrator |
2025-07-12 20:53:23.543235 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:53:23.543248 | orchestrator |
2025-07-12 20:53:23.543256 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:53:23.543264 | orchestrator | Saturday 12 July 2025 20:50:10 +0000 (0:00:00.167) 0:00:00.167 *********
2025-07-12 20:53:23.543330 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:53:23.543340 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:53:23.543347 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:53:23.543355 | orchestrator |
2025-07-12 20:53:23.543363 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:53:23.543371 | orchestrator | Saturday 12 July 2025 20:50:10 +0000 (0:00:00.288) 0:00:00.456 *********
2025-07-12 20:53:23.543379 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-07-12 20:53:23.543386 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-07-12 20:53:23.543394 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-07-12 20:53:23.543401 | orchestrator |
2025-07-12 20:53:23.543409 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-07-12 20:53:23.543416 | orchestrator |
2025-07-12 20:53:23.543424 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-07-12 20:53:23.543431 | orchestrator | Saturday 12 July 2025 20:50:11 +0000 (0:00:00.490) 0:00:00.946 *********
2025-07-12 20:53:23.543439 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 20:53:23.543461 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 20:53:23.543469 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 20:53:23.543476 | orchestrator |
2025-07-12 20:53:23.543484 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-07-12 20:53:23.543491 | orchestrator | Saturday 12 July 2025 20:50:11 +0000 (0:00:00.391) 0:00:01.338 *********
2025-07-12 20:53:23.543499 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:53:23.543507 | orchestrator |
2025-07-12 20:53:23.543535 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2025-07-12 20:53:23.543542 | orchestrator | Saturday 12 July 2025 20:50:12 +0000 (0:00:00.496) 0:00:01.835 *********
2025-07-12 20:53:23.543550 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:53:23.543557 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:53:23.543564 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:53:23.543571 | orchestrator |
2025-07-12 20:53:23.544101 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2025-07-12 20:53:23.544114 | orchestrator | Saturday 12 July 2025 20:50:15 +0000 (0:00:02.858) 0:00:04.693 *********
2025-07-12 20:53:23.544122 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:53:23.544130 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:53:23.544137 | orchestrator |
2025-07-12 20:53:23.544144 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] ***
2025-07-12 20:53:23.544152 | orchestrator |
2025-07-12 20:53:23.544159 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] ***
2025-07-12 20:53:23.556875 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": true, "msg": "Container exited with non-zero return code 1", "rc": 1, "stderr": "INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json\nINFO:__main__:Validating config file\nINFO:__main__:Kolla config strategy set to: COPY_ALWAYS\nINFO:__main__:Copying /etc/mysql/my.cnf to /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying permissions from /etc/mysql/my.cnf onto /etc/kolla/defaults/etc/mysql/my.cnf\nINFO:__main__:Copying service configuration files\nINFO:__main__:Deleting /etc/mysql/my.cnf\nINFO:__main__:Copying /var/lib/kolla/config_files/my.cnf to /etc/mysql/my.cnf\nINFO:__main__:Setting permission for /etc/mysql/my.cnf\nINFO:__main__:Writing out command to execute\nINFO:__main__:Setting permission for /var/log/kolla/mariadb\nINFO:__main__:Setting permission for /backup\n[00] 2025-07-12 20:50:19 Connecting to MariaDB server host: 192.168.16.11, user: backup_shard_0, password: set, port: 3306, socket: not set\n[00] 2025-07-12 20:50:19 Using server version 10.11.13-MariaDB-deb12-log\nmariabackup based on MariaDB server 10.11.13-MariaDB debian-linux-gnu (x86_64)\n[00] 2025-07-12 20:50:19 uses posix_fadvise().\n[00] 2025-07-12 20:50:19 cd to /var/lib/mysql/\n[00] 2025-07-12 20:50:19 open files limit requested
0, set to 1048576\n[00] 2025-07-12 20:50:19 mariabackup: using the following InnoDB configuration:\n[00] 2025-07-12 20:50:19 innodb_data_home_dir = \n[00] 2025-07-12 20:50:19 innodb_data_file_path = ibdata1:12M:autoextend\n[00] 2025-07-12 20:50:19 innodb_log_group_home_dir = ./\n[00] 2025-07-12 20:50:19 InnoDB: Using liburing\n2025-07-12 20:50:19 0 [Note] InnoDB: Number of transaction pools: 1\nmariabackup: io_uring_queue_init() failed with EPERM: sysctl kernel.io_uring_disabled has the value 2, or 1 and the user of the process is not a member of sysctl kernel.io_uring_group. (see man 2 io_uring_setup).\n2025-07-12 20:50:19 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF\n2025-07-12 20:50:19 0 [Note] InnoDB: Memory-mapped log (block size=512 bytes)\n[00] 2025-07-12 20:50:19 mariabackup: Generating a list of tablespaces\n[00] 2025-07-12 20:50:27 DDL tracking : create 9 \"./horizon/django_migrations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 10 \"./horizon/django_content_type.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 11 \"./horizon/#sql-alter-dc-7b.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 10 \"./horizon/django_content_type.ibd\",\"./horizon/#sql-ib24.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 11 \"./horizon/#sql-alter-dc-7b.ibd\",\"./horizon/django_content_type.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 10 \"./horizon/#sql-ib24.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 12 \"./horizon/auth_permission.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 13 \"./horizon/auth_group.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 14 \"./horizon/auth_group_permissions.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 15 \"./horizon/#sql-alter-dc-7b.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 12 \"./horizon/auth_permission.ibd\",\"./horizon/#sql-backup-dc-7b.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 15 
\"./horizon/#sql-alter-dc-7b.ibd\",\"./horizon/auth_permission.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 12 \"./horizon/#sql-backup-dc-7b.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 16 \"./horizon/#sql-alter-dc-7b.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 14 \"./horizon/auth_group_permissions.ibd\",\"./horizon/#sql-backup-dc-7b.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 16 \"./horizon/#sql-alter-dc-7b.ibd\",\"./horizon/auth_group_permissions.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 14 \"./horizon/#sql-backup-dc-7b.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 17 \"./horizon/#sql-alter-dc-7b.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 16 \"./horizon/auth_group_permissions.ibd\",\"./horizon/#sql-backup-dc-7b.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 17 \"./horizon/#sql-alter-dc-7b.ibd\",\"./horizon/auth_group_permissions.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 16 \"./horizon/#sql-backup-dc-7b.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 18 \"./horizon/#sql-alter-dc-7b.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 15 \"./horizon/auth_permission.ibd\",\"./horizon/#sql-backup-dc-7b.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 18 \"./horizon/#sql-alter-dc-7b.ibd\",\"./horizon/auth_permission.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 15 \"./horizon/#sql-backup-dc-7b.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 19 \"./horizon/#sql-alter-dc-7b.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 13 \"./horizon/auth_group.ibd\",\"./horizon/#sql-backup-dc-7b.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 19 \"./horizon/#sql-alter-dc-7b.ibd\",\"./horizon/auth_group.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 13 \"./horizon/#sql-backup-dc-7b.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 20 \"./horizon/django_session.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 21 
\"./keystone/alembic_version.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 22 \"./keystone/application_credential.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 23 \"./keystone/assignment.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 24 \"./keystone/access_rule.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 25 \"./keystone/config_register.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 26 \"./keystone/consumer.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 27 \"./keystone/credential.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 28 \"./keystone/group.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 29 \"./keystone/id_mapping.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 30 \"./keystone/identity_provider.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 31 \"./keystone/idp_remote_ids.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 32 \"./keystone/mapping.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 33 \"./keystone/policy.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 34 \"./keystone/policy_association.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 35 \"./keystone/project.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 36 \"./keystone/project_endpoint.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 37 \"./keystone/project_option.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 38 \"./keystone/project_tag.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 39 \"./keystone/region.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 40 \"./keystone/registered_limit.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 41 \"./keystone/request_token.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 42 \"./keystone/revocation_event.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 43 \"./keystone/role.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 44 \"./keystone/role_option.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 45 
\"./keystone/sensitive_config.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 46 \"./keystone/service.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 47 \"./keystone/service_provider.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 48 \"./keystone/system_assignment.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 49 \"./keystone/token.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 50 \"./keystone/trust.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 51 \"./keystone/trust_role.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 52 \"./keystone/user.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 53 \"./keystone/user_group_membership.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 54 \"./keystone/user_option.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 55 \"./keystone/whitelisted_config.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 56 \"./keystone/access_token.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 57 \"./keystone/application_credential_role.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 58 \"./keystone/application_credential_access_rule.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 59 \"./keystone/endpoint.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 60 \"./keystone/endpoint_group.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 61 \"./keystone/expiring_user_group_membership.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 62 \"./keystone/federation_protocol.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 63 \"./keystone/implied_role.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 64 \"./keystone/limit.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 65 \"./keystone/local_user.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 66 \"./keystone/nonlocal_user.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 67 \"./keystone/password.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 68 
\"./keystone/project_endpoint_group.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 69 \"./keystone/federated_user.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 70 \"./nova_api/alembic_version.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 71 \"./nova_api/cell_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 72 \"./nova_api/host_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 73 \"./nova_api/instance_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 74 \"./nova_api/flavors.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 75 \"./nova_api/flavor_extra_specs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 76 \"./nova_api/flavor_projects.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 77 \"./nova_api/request_specs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 78 \"./nova_api/build_requests.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 79 \"./nova_api/key_pairs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 80 \"./nova_api/projects.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 81 \"./nova_api/users.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 82 \"./nova_api/resource_classes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 83 \"./nova_api/resource_providers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 84 \"./nova_api/inventories.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 85 \"./nova_api/traits.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 86 \"./nova_api/allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 87 \"./nova_api/consumers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 88 \"./nova_api/resource_provider_aggregates.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 89 \"./nova_api/resource_provider_traits.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 90 \"./nova_api/placement_aggregates.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 91 \"./nova_api/aggregates.ibd\"\n[00] 
2025-07-12 20:50:27 DDL tracking : create 92 \"./nova_api/aggregate_hosts.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 93 \"./nova_api/aggregate_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 94 \"./nova_api/instance_groups.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 95 \"./nova_api/instance_group_policy.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 96 \"./nova_api/instance_group_member.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 97 \"./nova_api/quota_classes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 98 \"./nova_api/quota_usages.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 99 \"./nova_api/quotas.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 100 \"./nova_api/project_user_quotas.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 101 \"./nova_api/reservations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 102 \"./nova_cell0/alembic_version.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 103 \"./nova_cell0/instances.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 104 \"./nova_cell0/agent_builds.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 105 \"./nova_cell0/aggregates.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 106 \"./nova_cell0/aggregate_hosts.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 107 \"./nova_cell0/aggregate_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 108 \"./nova_cell0/allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 109 \"./nova_cell0/block_device_mapping.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 110 \"./nova_cell0/bw_usage_cache.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 111 \"./nova_cell0/cells.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 112 \"./nova_cell0/certificates.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 113 \"./nova_cell0/compute_nodes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 114 \"./nova_cell0/console_auth_tokens.ibd\"\n[00] 
2025-07-12 20:50:27 DDL tracking : create 115 \"./nova_cell0/console_pools.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 116 \"./nova_cell0/consoles.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 117 \"./nova_cell0/dns_domains.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 118 \"./nova_cell0/fixed_ips.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 119 \"./nova_cell0/floating_ips.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 120 \"./nova_cell0/instance_faults.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 121 \"./nova_cell0/instance_id_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 122 \"./nova_cell0/instance_info_caches.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 123 \"./nova_cell0/instance_groups.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 124 \"./nova_cell0/instance_group_policy.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 125 \"./nova_cell0/instance_group_member.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 126 \"./nova_cell0/instance_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 127 \"./nova_cell0/instance_system_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 128 \"./nova_cell0/instance_types.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 129 \"./nova_cell0/instance_type_extra_specs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 130 \"./nova_cell0/instance_type_projects.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 131 \"./nova_cell0/instance_actions.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 132 \"./nova_cell0/instance_actions_events.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 133 \"./nova_cell0/instance_extra.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 134 \"./nova_cell0/inventories.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 135 \"./nova_cell0/key_pairs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 136 \"./nova_cell0/migrations.ibd\"\n[00] 2025-07-12 20:50:27 
DDL tracking : create 137 \"./nova_cell0/networks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 138 \"./nova_cell0/pci_devices.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 139 \"./nova_cell0/provider_fw_rules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 140 \"./nova_cell0/quota_classes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 141 \"./nova_cell0/quota_usages.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 142 \"./nova_cell0/quotas.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 143 \"./nova_cell0/project_user_quotas.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 144 \"./nova_cell0/reservations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 145 \"./nova_cell0/resource_providers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 146 \"./nova_cell0/resource_provider_aggregates.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 147 \"./nova_cell0/s3_images.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 148 \"./nova_cell0/security_groups.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 149 \"./nova_cell0/security_group_instance_association.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 150 \"./nova_cell0/security_group_rules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 151 \"./nova_cell0/security_group_default_rules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 152 \"./nova_cell0/services.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 153 \"./nova_cell0/snapshot_id_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 154 \"./nova_cell0/snapshots.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 155 \"./nova_cell0/tags.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 156 \"./nova_cell0/task_log.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 157 \"./nova_cell0/virtual_interfaces.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 158 \"./nova_cell0/volume_id_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 159 
\"./nova_cell0/volume_usage_cache.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 160 \"./nova_cell0/#sql-alter-dc-11a.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 145 \"./nova_cell0/resource_providers.ibd\",\"./nova_cell0/#sql-backup-dc-11a.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 160 \"./nova_cell0/#sql-alter-dc-11a.ibd\",\"./nova_cell0/resource_providers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 145 \"./nova_cell0/#sql-backup-dc-11a.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 161 \"./nova_cell0/shadow_agent_builds.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 162 \"./nova_cell0/shadow_aggregate_hosts.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 163 \"./nova_cell0/shadow_aggregates.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 164 \"./nova_cell0/shadow_aggregate_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 165 \"./nova_cell0/shadow_alembic_version.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 166 \"./nova_cell0/shadow_block_device_mapping.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 167 \"./nova_cell0/shadow_instances.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 168 \"./nova_cell0/shadow_bw_usage_cache.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 169 \"./nova_cell0/shadow_cells.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 170 \"./nova_cell0/shadow_certificates.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 171 \"./nova_cell0/shadow_compute_nodes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 172 \"./nova_cell0/shadow_console_pools.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 173 \"./nova_cell0/shadow_consoles.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 174 \"./nova_cell0/shadow_dns_domains.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 175 \"./nova_cell0/shadow_fixed_ips.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 176 \"./nova_cell0/shadow_floating_ips.ibd\"\n[00] 2025-07-12 
20:50:27 DDL tracking : create 177 \"./nova_cell0/shadow_instance_actions.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 178 \"./nova_cell0/shadow_instance_actions_events.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 179 \"./nova_cell0/shadow_instance_extra.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 180 \"./nova_cell0/shadow_instance_faults.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 181 \"./nova_cell0/shadow_instance_group_member.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 182 \"./nova_cell0/shadow_instance_groups.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 183 \"./nova_cell0/shadow_instance_group_policy.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 184 \"./nova_cell0/shadow_instance_id_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 185 \"./nova_cell0/shadow_instance_info_caches.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 186 \"./nova_cell0/shadow_instance_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 187 \"./nova_cell0/shadow_instance_system_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 188 \"./nova_cell0/shadow_instance_type_extra_specs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 189 \"./nova_cell0/shadow_instance_types.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 190 \"./nova_cell0/shadow_instance_type_projects.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 191 \"./nova_cell0/shadow_key_pairs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 192 \"./nova_cell0/shadow_migrations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 193 \"./nova_cell0/shadow_networks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 194 \"./nova_cell0/shadow_pci_devices.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 195 \"./nova_cell0/shadow_project_user_quotas.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 196 \"./nova_cell0/shadow_provider_fw_rules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 197 
\"./nova_cell0/shadow_quota_classes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 198 \"./nova_cell0/shadow_quota_usages.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 199 \"./nova_cell0/shadow_quotas.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 200 \"./nova_cell0/shadow_reservations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 201 \"./nova_cell0/shadow_s3_images.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 202 \"./nova_cell0/shadow_security_group_default_rules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 203 \"./nova_cell0/shadow_security_group_instance_association.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 204 \"./nova_cell0/shadow_security_groups.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 205 \"./nova_cell0/shadow_security_group_rules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 206 \"./nova_cell0/shadow_services.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 207 \"./nova_cell0/shadow_snapshot_id_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 208 \"./nova_cell0/shadow_snapshots.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 209 \"./nova_cell0/shadow_task_log.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 210 \"./nova_cell0/shadow_virtual_interfaces.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 211 \"./nova_cell0/shadow_volume_id_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 212 \"./nova_cell0/shadow_volume_usage_cache.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 213 \"./nova_cell0/share_mapping.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 214 \"./cinder/alembic_version.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 215 \"./cinder/services.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 216 \"./cinder/consistencygroups.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 217 \"./cinder/cgsnapshots.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 218 \"./cinder/groups.ibd\"\n[00] 
2025-07-12 20:50:27 DDL tracking : create 219 \"./cinder/group_snapshots.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 220 \"./cinder/volumes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 221 \"./cinder/volume_attachment.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 222 \"./cinder/attachment_specs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 223 \"./cinder/snapshots.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 224 \"./cinder/snapshot_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 225 \"./cinder/quality_of_service_specs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 226 \"./cinder/volume_types.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 227 \"./cinder/volume_type_projects.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 228 \"./cinder/volume_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 229 \"./cinder/volume_type_extra_specs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 230 \"./cinder/quotas.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 231 \"./cinder/quota_classes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 232 \"./cinder/quota_usages.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 233 \"./cinder/reservations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 234 \"./cinder/volume_glance_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 235 \"./cinder/backups.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 236 \"./cinder/backup_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 237 \"./cinder/transfers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 238 \"./cinder/encryption.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 239 \"./cinder/volume_admin_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 240 \"./cinder/driver_initiator_data.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 241 \"./cinder/image_volume_cache_entries.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : 
create 242 \"./cinder/messages.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 243 \"./cinder/clusters.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 244 \"./cinder/workers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 245 \"./cinder/group_types.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 246 \"./cinder/group_type_specs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 247 \"./cinder/group_type_projects.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 248 \"./cinder/group_volume_type_mapping.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 249 \"./cinder/default_volume_types.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 250 \"./cinder/#sql-alter-dc-11a.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 241 \"./cinder/image_volume_cache_entries.ibd\",\"./cinder/#sql-ib263.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 250 \"./cinder/#sql-alter-dc-11a.ibd\",\"./cinder/image_volume_cache_entries.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 241 \"./cinder/#sql-ib263.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 251 \"./cinder/#sql-alter-dc-11a.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 244 \"./cinder/workers.ibd\",\"./cinder/#sql-backup-dc-11a.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 251 \"./cinder/#sql-alter-dc-11a.ibd\",\"./cinder/workers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 244 \"./cinder/#sql-backup-dc-11a.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 252 \"./cinder/#sql-alter-dc-11a.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 220 \"./cinder/volumes.ibd\",\"./cinder/#sql-ib265.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 252 \"./cinder/#sql-alter-dc-11a.ibd\",\"./cinder/volumes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 220 \"./cinder/#sql-ib265.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 253 \"./cinder/#sql-alter-dc-11a.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 223 
\"./cinder/snapshots.ibd\",\"./cinder/#sql-ib266.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 253 \"./cinder/#sql-alter-dc-11a.ibd\",\"./cinder/snapshots.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 223 \"./cinder/#sql-ib266.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 254 \"./glance/alembic_version.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 255 \"./glance/images.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 256 \"./glance/image_properties.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 257 \"./glance/image_locations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 258 \"./glance/image_members.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 259 \"./glance/image_tags.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 260 \"./glance/tasks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 261 \"./glance/task_info.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 262 \"./glance/metadef_namespaces.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 263 \"./glance/metadef_resource_types.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 264 \"./glance/metadef_namespace_resource_types.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 265 \"./glance/metadef_objects.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 266 \"./glance/metadef_properties.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 267 \"./glance/metadef_tags.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 268 \"./glance/artifacts.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 269 \"./glance/artifact_blobs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 270 \"./glance/artifact_dependencies.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 271 \"./glance/artifact_properties.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 272 \"./glance/artifact_tags.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 273 \"./glance/artifact_blob_locations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : 
create 274 \"./glance/#sql-alter-dc-98.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 255 \"./glance/images.ibd\",\"./glance/#sql-ib287.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 274 \"./glance/#sql-alter-dc-98.ibd\",\"./glance/images.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 255 \"./glance/#sql-ib287.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 275 \"./glance/node_reference.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 276 \"./glance/cached_images.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 277 \"./glance/#sql-alter-dc-98.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 274 \"./glance/images.ibd\",\"./glance/#sql-ib290.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 277 \"./glance/#sql-alter-dc-98.ibd\",\"./glance/images.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 274 \"./glance/#sql-ib290.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 273 \"./glance/artifact_blob_locations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 271 \"./glance/artifact_properties.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 269 \"./glance/artifact_blobs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 270 \"./glance/artifact_dependencies.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 272 \"./glance/artifact_tags.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 268 \"./glance/artifacts.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 278 \"./nova/alembic_version.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 279 \"./nova/instances.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 280 \"./nova/agent_builds.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 281 \"./nova/aggregates.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 282 \"./nova/aggregate_hosts.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 283 \"./nova/aggregate_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 284 \"./nova/allocations.ibd\"\n[00] 2025-07-12 20:50:27 
DDL tracking : create 285 \"./nova/block_device_mapping.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 286 \"./nova/bw_usage_cache.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 287 \"./nova/cells.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 288 \"./nova/certificates.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 289 \"./nova/compute_nodes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 290 \"./nova/console_auth_tokens.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 291 \"./nova/console_pools.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 292 \"./nova/consoles.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 293 \"./nova/dns_domains.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 294 \"./nova/fixed_ips.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 295 \"./nova/floating_ips.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 296 \"./nova/instance_faults.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 297 \"./nova/instance_id_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 298 \"./nova/instance_info_caches.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 299 \"./nova/instance_groups.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 300 \"./nova/instance_group_policy.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 301 \"./nova/instance_group_member.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 302 \"./nova/instance_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 303 \"./nova/instance_system_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 304 \"./nova/instance_types.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 305 \"./nova/instance_type_extra_specs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 306 \"./nova/instance_type_projects.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 307 \"./nova/instance_actions.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 308 
\"./nova/instance_actions_events.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 309 \"./nova/instance_extra.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 310 \"./nova/inventories.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 311 \"./nova/key_pairs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 312 \"./nova/migrations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 313 \"./nova/networks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 314 \"./nova/pci_devices.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 315 \"./nova/provider_fw_rules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 316 \"./nova/quota_classes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 317 \"./nova/quota_usages.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 318 \"./nova/quotas.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 319 \"./nova/project_user_quotas.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 320 \"./nova/reservations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 321 \"./nova/resource_providers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 322 \"./nova/resource_provider_aggregates.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 323 \"./nova/s3_images.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 324 \"./nova/security_groups.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 325 \"./nova/security_group_instance_association.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 326 \"./nova/security_group_rules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 327 \"./nova/security_group_default_rules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 328 \"./nova/services.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 329 \"./nova/snapshot_id_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 330 \"./nova/snapshots.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 331 \"./nova/tags.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 332 
\"./nova/task_log.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 333 \"./nova/virtual_interfaces.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 334 \"./nova/volume_id_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 335 \"./nova/volume_usage_cache.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 336 \"./nova/#sql-alter-dc-23f.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 321 \"./nova/resource_providers.ibd\",\"./nova/#sql-backup-dc-23f.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 336 \"./nova/#sql-alter-dc-23f.ibd\",\"./nova/resource_providers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 321 \"./nova/#sql-backup-dc-23f.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 337 \"./nova/shadow_agent_builds.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 338 \"./nova/shadow_aggregate_hosts.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 339 \"./nova/shadow_aggregates.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 340 \"./nova/shadow_aggregate_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 341 \"./nova/shadow_alembic_version.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 342 \"./nova/shadow_block_device_mapping.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 343 \"./nova/shadow_instances.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 344 \"./nova/shadow_bw_usage_cache.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 345 \"./nova/shadow_cells.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 346 \"./nova/shadow_certificates.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 347 \"./nova/shadow_compute_nodes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 348 \"./nova/shadow_console_pools.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 349 \"./nova/shadow_consoles.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 350 \"./nova/shadow_dns_domains.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 351 
\"./nova/shadow_fixed_ips.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 352 \"./nova/shadow_floating_ips.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 353 \"./nova/shadow_instance_actions.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 354 \"./nova/shadow_instance_actions_events.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 355 \"./nova/shadow_instance_extra.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 356 \"./nova/shadow_instance_faults.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 357 \"./nova/shadow_instance_group_member.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 358 \"./nova/shadow_instance_groups.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 359 \"./nova/shadow_instance_group_policy.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 360 \"./nova/shadow_instance_id_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 361 \"./nova/shadow_instance_info_caches.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 362 \"./nova/shadow_instance_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 363 \"./nova/shadow_instance_system_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 364 \"./nova/shadow_instance_type_extra_specs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 365 \"./nova/shadow_instance_types.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 366 \"./nova/shadow_instance_type_projects.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 367 \"./nova/shadow_key_pairs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 368 \"./nova/shadow_migrations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 369 \"./nova/shadow_networks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 370 \"./nova/shadow_pci_devices.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 371 \"./nova/shadow_project_user_quotas.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 372 \"./nova/shadow_provider_fw_rules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking 
: create 373 \"./nova/shadow_quota_classes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 374 \"./nova/shadow_quota_usages.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 375 \"./nova/shadow_quotas.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 376 \"./nova/shadow_reservations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 377 \"./nova/shadow_s3_images.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 378 \"./nova/shadow_security_group_default_rules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 379 \"./nova/shadow_security_group_instance_association.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 380 \"./nova/shadow_security_groups.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 381 \"./nova/shadow_security_group_rules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 382 \"./nova/shadow_services.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 383 \"./nova/shadow_snapshot_id_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 384 \"./nova/shadow_snapshots.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 385 \"./nova/shadow_task_log.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 386 \"./nova/shadow_virtual_interfaces.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 387 \"./nova/shadow_volume_id_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 388 \"./nova/shadow_volume_usage_cache.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 389 \"./nova/share_mapping.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 390 \"./barbican/alembic_version.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 391 \"./barbican/projects.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 392 \"./barbican/secret_stores.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 393 \"./barbican/transport_keys.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 394 \"./barbican/certificate_authorities.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 395 
\"./barbican/containers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 396 \"./barbican/kek_data.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 397 \"./barbican/project_quotas.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 398 \"./barbican/project_secret_store.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 399 \"./barbican/secrets.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 400 \"./barbican/certificate_authority_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 401 \"./barbican/container_acls.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 402 \"./barbican/container_consumer_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 403 \"./barbican/container_secret.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 404 \"./barbican/encrypted_data.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 405 \"./barbican/orders.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 406 \"./barbican/preferred_certificate_authorities.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 407 \"./barbican/project_certificate_authorities.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 408 \"./barbican/secret_acls.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 409 \"./barbican/secret_store_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 410 \"./barbican/secret_user_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 411 \"./barbican/container_acl_users.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 412 \"./barbican/order_barbican_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 413 \"./barbican/order_plugin_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 414 \"./barbican/order_retry_tasks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 415 \"./barbican/secret_acl_users.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 416 \"./barbican/secret_consumer_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 417 
\"./barbican/#sql-alter-dc-345.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 416 \"./barbican/secret_consumer_metadata.ibd\",\"./barbican/#sql-ib430.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 417 \"./barbican/#sql-alter-dc-345.ibd\",\"./barbican/secret_consumer_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 416 \"./barbican/#sql-ib430.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 418 \"./barbican/#sql-alter-dc-345.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 417 \"./barbican/secret_consumer_metadata.ibd\",\"./barbican/#sql-backup-dc-345.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 418 \"./barbican/#sql-alter-dc-345.ibd\",\"./barbican/secret_consumer_metadata.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 417 \"./barbican/#sql-backup-dc-345.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 419 \"./designate/alembic_version.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 420 \"./designate/pools.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 421 \"./designate/pool_ns_records.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 422 \"./designate/pool_attributes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 423 \"./designate/domains.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 424 \"./designate/domain_attributes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 425 \"./designate/recordsets.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 426 \"./designate/records.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 427 \"./designate/quotas.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 428 \"./designate/tsigkeys.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 429 \"./designate/tlds.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 430 \"./designate/zone_transfer_requests.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 431 \"./designate/zone_transfer_accepts.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 432 
\"./designate/zone_tasks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 433 \"./designate/blacklists.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 423 \"./designate/domains.ibd\",\"./designate/zones.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 434 \"./designate/#sql-alter-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 424 \"./designate/domain_attributes.ibd\",\"./designate/#sql-ib447.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 434 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/domain_attributes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 424 \"./designate/#sql-ib447.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 434 \"./designate/domain_attributes.ibd\",\"./designate/zone_attributes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 435 \"./designate/#sql-alter-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 425 \"./designate/recordsets.ibd\",\"./designate/#sql-ib448.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 435 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/recordsets.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 425 \"./designate/#sql-ib448.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 436 \"./designate/#sql-alter-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 435 \"./designate/recordsets.ibd\",\"./designate/#sql-ib449.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 436 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/recordsets.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 435 \"./designate/#sql-ib449.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 437 \"./designate/#sql-alter-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 426 \"./designate/records.ibd\",\"./designate/#sql-ib450.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 437 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/records.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 426 \"./designate/#sql-ib450.ibd\"\n[00] 
2025-07-12 20:50:27 DDL tracking : create 438 \"./designate/#sql-alter-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 437 \"./designate/records.ibd\",\"./designate/#sql-ib451.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 438 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/records.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 437 \"./designate/#sql-ib451.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 439 \"./designate/#sql-alter-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 430 \"./designate/zone_transfer_requests.ibd\",\"./designate/#sql-ib452.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 439 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/zone_transfer_requests.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 430 \"./designate/#sql-ib452.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 440 \"./designate/#sql-alter-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 431 \"./designate/zone_transfer_accepts.ibd\",\"./designate/#sql-ib453.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 440 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/zone_transfer_accepts.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 431 \"./designate/#sql-ib453.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 441 \"./designate/#sql-alter-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 423 \"./designate/zones.ibd\",\"./designate/#sql-backup-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 441 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/zones.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 423 \"./designate/#sql-backup-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 442 \"./designate/#sql-alter-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 434 \"./designate/zone_attributes.ibd\",\"./designate/#sql-backup-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 442 
\"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/zone_attributes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 434 \"./designate/#sql-backup-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 443 \"./designate/#sql-alter-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 436 \"./designate/recordsets.ibd\",\"./designate/#sql-backup-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 443 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/recordsets.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 436 \"./designate/#sql-backup-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 444 \"./designate/#sql-alter-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 438 \"./designate/records.ibd\",\"./designate/#sql-backup-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 444 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/records.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 438 \"./designate/#sql-backup-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 445 \"./designate/#sql-alter-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 439 \"./designate/zone_transfer_requests.ibd\",\"./designate/#sql-backup-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 445 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/zone_transfer_requests.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 439 \"./designate/#sql-backup-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 446 \"./designate/#sql-alter-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 440 \"./designate/zone_transfer_accepts.ibd\",\"./designate/#sql-backup-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 446 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/zone_transfer_accepts.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 440 \"./designate/#sql-backup-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 447 \"./designate/#sql-alter-dc-3d3.ibd\"\n[00] 
2025-07-12 20:50:27 DDL tracking : rename 432 \"./designate/zone_tasks.ibd\",\"./designate/#sql-backup-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 447 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/zone_tasks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 432 \"./designate/#sql-backup-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 448 \"./designate/zone_masters.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 449 \"./designate/#sql-alter-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 442 \"./designate/zone_attributes.ibd\",\"./designate/#sql-backup-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 449 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/zone_attributes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 442 \"./designate/#sql-backup-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 450 \"./designate/pool_nameservers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 451 \"./designate/pool_targets.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 452 \"./designate/pool_target_masters.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 453 \"./designate/pool_target_options.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 454 \"./designate/pool_also_notifies.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 455 \"./designate/service_statuses.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 456 \"./designate/shared_zones.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 457 \"./designate/#sql-alter-dc-3d3.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 441 \"./designate/zones.ibd\",\"./designate/#sql-ib470.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 457 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/zones.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 441 \"./designate/#sql-ib470.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 458 \"./neutron/alembic_version.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : 
create 459 \"./neutron/agents.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 460 \"./neutron/networks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 461 \"./neutron/ports.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 462 \"./neutron/subnets.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 463 \"./neutron/dnsnameservers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 464 \"./neutron/ipallocationpools.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 465 \"./neutron/subnetroutes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 466 \"./neutron/ipallocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 467 \"./neutron/ipavailabilityranges.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 468 \"./neutron/networkdhcpagentbindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 469 \"./neutron/externalnetworks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 470 \"./neutron/routers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 471 \"./neutron/floatingips.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 472 \"./neutron/routerroutes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 473 \"./neutron/routerl3agentbindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 474 \"./neutron/router_extra_attributes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 475 \"./neutron/ha_router_agent_port_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 476 \"./neutron/ha_router_networks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 477 \"./neutron/ha_router_vrid_allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 478 \"./neutron/routerports.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 479 \"./neutron/securitygroups.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 480 \"./neutron/securitygrouprules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 481 \"./neutron/securitygroupportbindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 
482 \"./neutron/default_security_group.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 483 \"./neutron/networksecuritybindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 484 \"./neutron/portsecuritybindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 485 \"./neutron/providerresourceassociations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 486 \"./neutron/quotas.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 487 \"./neutron/allowedaddresspairs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 488 \"./neutron/portbindingports.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 489 \"./neutron/extradhcpopts.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 490 \"./neutron/subnetpools.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 491 \"./neutron/subnetpoolprefixes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 492 \"./neutron/network_states.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 493 \"./neutron/network_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 494 \"./neutron/ovs_tunnel_endpoints.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 495 \"./neutron/ovs_tunnel_allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 496 \"./neutron/ovs_vlan_allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 497 \"./neutron/ovs_network_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 498 \"./neutron/ml2_vlan_allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 499 \"./neutron/ml2_vxlan_endpoints.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 500 \"./neutron/ml2_gre_endpoints.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 501 \"./neutron/ml2_vxlan_allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 502 \"./neutron/ml2_gre_allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 503 \"./neutron/ml2_flat_allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 504 
\"./neutron/ml2_network_segments.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 505 \"./neutron/ml2_port_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 506 \"./neutron/ml2_port_binding_levels.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 507 \"./neutron/cisco_ml2_nexusport_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 508 \"./neutron/arista_provisioned_nets.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 509 \"./neutron/arista_provisioned_vms.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 510 \"./neutron/arista_provisioned_tenants.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 511 \"./neutron/ml2_nexus_vxlan_allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 512 \"./neutron/ml2_nexus_vxlan_mcast_groups.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 513 \"./neutron/cisco_ml2_nexus_nve.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 514 \"./neutron/dvr_host_macs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 515 \"./neutron/ml2_dvr_port_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 516 \"./neutron/csnat_l3_agent_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 517 \"./neutron/firewall_policies.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 518 \"./neutron/firewalls.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 519 \"./neutron/firewall_rules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 520 \"./neutron/healthmonitors.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 521 \"./neutron/vips.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 522 \"./neutron/pools.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 523 \"./neutron/sessionpersistences.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 524 \"./neutron/poolloadbalanceragentbindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 525 \"./neutron/members.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 526 
\"./neutron/poolmonitorassociations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 527 \"./neutron/poolstatisticss.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 528 \"./neutron/embrane_pool_port.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 529 \"./neutron/ipsecpolicies.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 530 \"./neutron/ikepolicies.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 531 \"./neutron/vpnservices.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 532 \"./neutron/ipsec_site_connections.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 533 \"./neutron/ipsecpeercidrs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 534 \"./neutron/meteringlabels.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 535 \"./neutron/meteringlabelrules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 536 \"./neutron/brocadenetworks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 537 \"./neutron/brocadeports.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 538 \"./neutron/ml2_brocadenetworks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 539 \"./neutron/ml2_brocadeports.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 540 \"./neutron/cisco_policy_profiles.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 541 \"./neutron/cisco_network_profiles.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 542 \"./neutron/cisco_n1kv_vxlan_allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 543 \"./neutron/cisco_n1kv_vlan_allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 544 \"./neutron/cisco_credentials.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 545 \"./neutron/cisco_qos_policies.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 546 \"./neutron/cisco_n1kv_profile_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 547 \"./neutron/cisco_n1kv_vmnetworks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 548 
\"./neutron/cisco_n1kv_trunk_segments.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 549 \"./neutron/cisco_provider_networks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 550 \"./neutron/cisco_n1kv_multi_segments.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 551 \"./neutron/cisco_n1kv_network_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 552 \"./neutron/cisco_n1kv_port_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 553 \"./neutron/cisco_csr_identifier_map.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 554 \"./neutron/cisco_ml2_apic_host_links.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 555 \"./neutron/cisco_ml2_apic_names.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 556 \"./neutron/cisco_ml2_apic_contracts.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 557 \"./neutron/cisco_hosting_devices.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 558 \"./neutron/cisco_port_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 559 \"./neutron/cisco_router_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 560 \"./neutron/cisco_ml2_n1kv_policy_profiles.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 561 \"./neutron/cisco_ml2_n1kv_network_profiles.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 562 \"./neutron/cisco_ml2_n1kv_port_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 563 \"./neutron/cisco_ml2_n1kv_network_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 564 \"./neutron/cisco_ml2_n1kv_vxlan_allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 565 \"./neutron/cisco_ml2_n1kv_vlan_allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 566 \"./neutron/cisco_ml2_n1kv_profile_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 567 \"./neutron/ml2_ucsm_port_profiles.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 568 \"./neutron/ofcportmappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL 
tracking : create 569 \"./neutron/ofcroutermappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 570 \"./neutron/routerproviders.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 571 \"./neutron/ofctenantmappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 572 \"./neutron/ofcfiltermappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 573 \"./neutron/ofcnetworkmappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 574 \"./neutron/packetfilters.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 575 \"./neutron/portinfos.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 576 \"./neutron/networkflavors.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 577 \"./neutron/routerflavors.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 578 \"./neutron/routerrules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 579 \"./neutron/nexthops.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 580 \"./neutron/consistencyhashes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 581 \"./neutron/tz_network_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 582 \"./neutron/multi_provider_networks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 583 \"./neutron/vcns_router_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 584 \"./neutron/networkgateways.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 585 \"./neutron/networkconnections.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 586 \"./neutron/qosqueues.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 587 \"./neutron/networkqueuemappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 588 \"./neutron/portqueuemappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 589 \"./neutron/maclearningstates.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 590 \"./neutron/neutron_nsx_port_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 591 \"./neutron/lsn.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : 
create 592 \"./neutron/lsn_port.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 593 \"./neutron/neutron_nsx_network_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 594 \"./neutron/neutron_nsx_router_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 595 \"./neutron/neutron_nsx_security_group_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 596 \"./neutron/networkgatewaydevicereferences.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 597 \"./neutron/networkgatewaydevices.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 598 \"./neutron/nuage_net_partitions.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 599 \"./neutron/nuage_subnet_l2dom_mapping.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 600 \"./neutron/nuage_net_partition_router_mapping.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 601 \"./neutron/nuage_provider_net_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 602 \"./neutron/nsxv_router_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 603 \"./neutron/nsxv_internal_networks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 604 \"./neutron/nsxv_internal_edges.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 605 \"./neutron/nsxv_firewall_rule_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 606 \"./neutron/nsxv_edge_dhcp_static_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 607 \"./neutron/nsxv_edge_vnic_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 608 \"./neutron/nsxv_spoofguard_policy_network_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 609 \"./neutron/nsxv_security_group_section_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 610 \"./neutron/nsxv_tz_network_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 611 \"./neutron/nsxv_port_vnic_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 612 \"./neutron/nsxv_port_index_mappings.ibd\"\n[00] 
2025-07-12 20:50:27 DDL tracking : create 613 \"./neutron/nsxv_rule_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 614 \"./neutron/nsxv_router_ext_attributes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 615 \"./neutron/nsxv_vdr_dhcp_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 616 \"./neutron/ipamsubnets.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 617 \"./neutron/ipamallocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 618 \"./neutron/ipamallocationpools.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 619 \"./neutron/ipamavailabilityranges.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 620 \"./neutron/address_scopes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 621 \"./neutron/flavors.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 622 \"./neutron/serviceprofiles.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 623 \"./neutron/flavorserviceprofilebindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 624 \"./neutron/networkrbacs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 625 \"./neutron/quotausages.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 626 \"./neutron/qos_policies.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 627 \"./neutron/qos_network_policy_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 628 \"./neutron/qos_port_policy_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 629 \"./neutron/qos_bandwidth_limit_rules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 630 \"./neutron/reservations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 631 \"./neutron/resourcedeltas.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 632 \"./neutron/standardattributes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 633 \"./neutron/networkdnsdomains.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 634 \"./neutron/floatingipdnses.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 635 
\"./neutron/portdnses.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 636 \"./neutron/auto_allocated_topologies.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 637 \"./neutron/bgp_speakers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 638 \"./neutron/bgp_peers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 639 \"./neutron/bgp_speaker_network_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 640 \"./neutron/bgp_speaker_peer_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 641 \"./neutron/bgp_speaker_dragent_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 642 \"./neutron/qospolicyrbacs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 643 \"./neutron/tags.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 644 \"./neutron/qos_dscp_marking_rules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 645 \"./neutron/trunks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 646 \"./neutron/subports.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 647 \"./neutron/provisioningblocks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 497 \"./neutron/ovs_network_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 496 \"./neutron/ovs_vlan_allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 493 \"./neutron/network_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 495 \"./neutron/ovs_tunnel_allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 492 \"./neutron/network_states.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 494 \"./neutron/ovs_tunnel_endpoints.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 576 \"./neutron/networkflavors.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 577 \"./neutron/routerflavors.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 648 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 623 
\"./neutron/flavorserviceprofilebindings.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 648 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/flavorserviceprofilebindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 623 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 649 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 648 \"./neutron/flavorserviceprofilebindings.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 649 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/flavorserviceprofilebindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 648 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 650 \"./neutron/ml2_geneve_allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 651 \"./neutron/ml2_geneve_endpoints.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 552 \"./neutron/cisco_n1kv_port_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 551 \"./neutron/cisco_n1kv_network_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 550 \"./neutron/cisco_n1kv_multi_segments.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 549 \"./neutron/cisco_provider_networks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 548 \"./neutron/cisco_n1kv_trunk_segments.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 547 \"./neutron/cisco_n1kv_vmnetworks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 546 \"./neutron/cisco_n1kv_profile_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 545 \"./neutron/cisco_qos_policies.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 544 \"./neutron/cisco_credentials.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 543 \"./neutron/cisco_n1kv_vlan_allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 542 \"./neutron/cisco_n1kv_vxlan_allocations.ibd\"\n[00] 2025-07-12 
20:50:27 DDL tracking : delete 541 \"./neutron/cisco_network_profiles.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 540 \"./neutron/cisco_policy_profiles.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 528 \"./neutron/embrane_pool_port.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 652 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 461 \"./neutron/ports.ibd\",\"./neutron/#sql-ib665.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 652 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/ports.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 461 \"./neutron/#sql-ib665.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 653 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 652 \"./neutron/ports.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 653 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/ports.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 652 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 654 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 460 \"./neutron/networks.ibd\",\"./neutron/#sql-ib667.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 654 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/networks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 460 \"./neutron/#sql-ib667.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 655 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 654 \"./neutron/networks.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 655 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/networks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 654 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 656 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 462 
\"./neutron/subnets.ibd\",\"./neutron/#sql-ib669.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 656 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/subnets.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 462 \"./neutron/#sql-ib669.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 657 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 656 \"./neutron/subnets.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 657 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/subnets.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 656 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 658 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 490 \"./neutron/subnetpools.ibd\",\"./neutron/#sql-ib671.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 658 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/subnetpools.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 490 \"./neutron/#sql-ib671.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 659 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 658 \"./neutron/subnetpools.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 659 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/subnetpools.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 658 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 660 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 479 \"./neutron/securitygroups.ibd\",\"./neutron/#sql-ib673.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 660 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/securitygroups.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 479 \"./neutron/#sql-ib673.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 661 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking 
: rename 660 \"./neutron/securitygroups.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 661 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/securitygroups.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 660 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 662 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 471 \"./neutron/floatingips.ibd\",\"./neutron/#sql-ib675.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 662 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/floatingips.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 471 \"./neutron/#sql-ib675.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 663 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 662 \"./neutron/floatingips.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 663 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/floatingips.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 662 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 664 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 470 \"./neutron/routers.ibd\",\"./neutron/#sql-ib677.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 664 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/routers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 470 \"./neutron/#sql-ib677.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 665 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 664 \"./neutron/routers.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 665 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/routers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 664 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 666 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 
2025-07-12 20:50:27 DDL tracking : rename 480 \"./neutron/securitygrouprules.ibd\",\"./neutron/#sql-ib679.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 666 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/securitygrouprules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 480 \"./neutron/#sql-ib679.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 667 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 666 \"./neutron/securitygrouprules.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 667 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/securitygrouprules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 666 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 516 \"./neutron/csnat_l3_agent_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 573 \"./neutron/ofcnetworkmappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 568 \"./neutron/ofcportmappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 569 \"./neutron/ofcroutermappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 572 \"./neutron/ofcfiltermappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 571 \"./neutron/ofctenantmappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 575 \"./neutron/portinfos.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 570 \"./neutron/routerproviders.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 574 \"./neutron/packetfilters.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 504 \"./neutron/ml2_network_segments.ibd\",\"./neutron/networksegments.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 668 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 506 \"./neutron/ml2_port_binding_levels.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 668 
\"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/ml2_port_binding_levels.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 506 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 669 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 657 \"./neutron/subnets.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 669 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/subnets.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 657 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 670 \"./neutron/segmenthostmappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 515 \"./neutron/ml2_dvr_port_bindings.ibd\",\"./neutron/ml2_distributed_port_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 671 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 665 \"./neutron/routers.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 671 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/routers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 665 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 672 \"./neutron/subnet_service_types.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 673 \"./neutron/qos_minimum_bandwidth_rules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 674 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 505 \"./neutron/ml2_port_bindings.ibd\",\"./neutron/#sql-ib687.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 674 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/ml2_port_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 505 \"./neutron/#sql-ib687.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 675 \"./neutron/portdataplanestatuses.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 676 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 
2025-07-12 20:50:27 DDL tracking : rename 629 \"./neutron/qos_bandwidth_limit_rules.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 676 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/qos_bandwidth_limit_rules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 629 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 677 \"./neutron/qos_policies_default.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 678 \"./neutron/logs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 679 \"./neutron/qos_fip_policy_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 680 \"./neutron/portforwardings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 681 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 680 \"./neutron/portforwardings.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 681 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/portforwardings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 680 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 682 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 681 \"./neutron/portforwardings.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 682 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/portforwardings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 681 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 683 \"./neutron/portuplinkstatuspropagation.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 684 \"./neutron/qos_router_gw_policy_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 685 \"./neutron/network_segment_ranges.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 686 \"./neutron/securitygrouprbacs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 687 
\"./neutron/conntrack_helpers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 688 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 669 \"./neutron/subnets.ibd\",\"./neutron/#sql-ib701.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 688 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/subnets.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 669 \"./neutron/#sql-ib701.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 689 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 655 \"./neutron/networks.ibd\",\"./neutron/#sql-ib702.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 689 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/networks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 655 \"./neutron/#sql-ib702.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 690 \"./neutron/ovn_revision_numbers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 691 \"./neutron/ovn_hash_ring.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 692 \"./neutron/network_subnet_lock.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 693 \"./neutron/subnet_dns_publish_fixed_ips.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 694 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 682 \"./neutron/portforwardings.ibd\",\"./neutron/#sql-ib707.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 694 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/portforwardings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 682 \"./neutron/#sql-ib707.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 695 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 694 \"./neutron/portforwardings.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 695 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/portforwardings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 694 
\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 696 \"./neutron/dvr_fip_gateway_port_network.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 697 \"./neutron/addressscoperbacs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 698 \"./neutron/subnetpoolrbacs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 699 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 498 \"./neutron/ml2_vlan_allocations.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 699 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/ml2_vlan_allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 498 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 700 \"./neutron/address_groups.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 701 \"./neutron/address_associations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 702 \"./neutron/portnumaaffinitypolicies.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 703 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 667 \"./neutron/securitygrouprules.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 703 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/securitygrouprules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 667 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 704 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 700 \"./neutron/address_groups.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 704 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/address_groups.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 700 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 705 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 
704 \"./neutron/address_groups.ibd\",\"./neutron/#sql-ib718.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 705 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/address_groups.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 704 \"./neutron/#sql-ib718.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 706 \"./neutron/portdeviceprofiles.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 707 \"./neutron/addressgrouprbacs.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 708 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 643 \"./neutron/tags.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 708 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/tags.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 643 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 709 \"./neutron/qos_packet_rate_limit_rules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 710 \"./neutron/qos_minimum_packet_rate_rules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 711 \"./neutron/local_ips.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 712 \"./neutron/local_ip_associations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 488 \"./neutron/portbindingports.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 713 \"./neutron/router_ndp_proxy_state.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 714 \"./neutron/ndp_proxies.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 715 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 695 \"./neutron/portforwardings.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 715 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/portforwardings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 695 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 716 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 
2025-07-12 20:50:27 DDL tracking : rename 715 \"./neutron/portforwardings.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 716 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/portforwardings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 715 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 717 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 716 \"./neutron/portforwardings.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 717 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/portforwardings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 716 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 718 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 481 \"./neutron/securitygroupportbindings.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 718 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/securitygroupportbindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 481 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 719 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 504 \"./neutron/networksegments.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 719 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/networksegments.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 504 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 720 \"./neutron/porthints.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 721 \"./neutron/securitygroupdefaultrules.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 556 \"./neutron/cisco_ml2_apic_contracts.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 555 \"./neutron/cisco_ml2_apic_names.ibd\"\n[00] 2025-07-12 
20:50:27 DDL tracking : delete 554 \"./neutron/cisco_ml2_apic_host_links.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 560 \"./neutron/cisco_ml2_n1kv_policy_profiles.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 562 \"./neutron/cisco_ml2_n1kv_port_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 563 \"./neutron/cisco_ml2_n1kv_network_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 564 \"./neutron/cisco_ml2_n1kv_vxlan_allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 565 \"./neutron/cisco_ml2_n1kv_vlan_allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 566 \"./neutron/cisco_ml2_n1kv_profile_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 507 \"./neutron/cisco_ml2_nexusport_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 513 \"./neutron/cisco_ml2_nexus_nve.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 512 \"./neutron/ml2_nexus_vxlan_mcast_groups.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 567 \"./neutron/ml2_ucsm_port_profiles.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 558 \"./neutron/cisco_port_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 559 \"./neutron/cisco_router_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 557 \"./neutron/cisco_hosting_devices.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 561 \"./neutron/cisco_ml2_n1kv_network_profiles.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 511 \"./neutron/ml2_nexus_vxlan_allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 581 \"./neutron/tz_network_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 593 \"./neutron/neutron_nsx_network_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 595 \"./neutron/neutron_nsx_security_group_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 590 \"./neutron/neutron_nsx_port_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 594 
\"./neutron/neutron_nsx_router_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 582 \"./neutron/multi_provider_networks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 585 \"./neutron/networkconnections.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 596 \"./neutron/networkgatewaydevicereferences.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 597 \"./neutron/networkgatewaydevices.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 584 \"./neutron/networkgateways.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 589 \"./neutron/maclearningstates.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 588 \"./neutron/portqueuemappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 587 \"./neutron/networkqueuemappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 586 \"./neutron/qosqueues.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 592 \"./neutron/lsn_port.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 591 \"./neutron/lsn.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 602 \"./neutron/nsxv_router_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 607 \"./neutron/nsxv_edge_vnic_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 606 \"./neutron/nsxv_edge_dhcp_static_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 603 \"./neutron/nsxv_internal_networks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 604 \"./neutron/nsxv_internal_edges.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 609 \"./neutron/nsxv_security_group_section_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 613 \"./neutron/nsxv_rule_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 611 \"./neutron/nsxv_port_vnic_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 614 \"./neutron/nsxv_router_ext_attributes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 610 \"./neutron/nsxv_tz_network_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : 
delete 612 \"./neutron/nsxv_port_index_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 605 \"./neutron/nsxv_firewall_rule_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 608 \"./neutron/nsxv_spoofguard_policy_network_mappings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 615 \"./neutron/nsxv_vdr_dhcp_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 583 \"./neutron/vcns_router_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 537 \"./neutron/brocadeports.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 536 \"./neutron/brocadenetworks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 539 \"./neutron/ml2_brocadeports.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 538 \"./neutron/ml2_brocadenetworks.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 600 \"./neutron/nuage_net_partition_router_mapping.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 601 \"./neutron/nuage_provider_net_bindings.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 599 \"./neutron/nuage_subnet_l2dom_mapping.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 598 \"./neutron/nuage_net_partitions.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 553 \"./neutron/cisco_csr_identifier_map.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 722 \"./neutron/porthardwareoffloadtype.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 723 \"./neutron/porttrusted.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 724 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 626 \"./neutron/qos_policies.ibd\",\"./neutron/#sql-ib737.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 724 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/qos_policies.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 626 \"./neutron/#sql-ib737.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 725 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 724 
\"./neutron/qos_policies.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 725 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/qos_policies.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 724 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 726 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 719 \"./neutron/networksegments.ibd\",\"./neutron/#sql-ib739.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 726 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/networksegments.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 719 \"./neutron/#sql-ib739.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 727 \"./neutron/#sql-alter-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 726 \"./neutron/networksegments.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 727 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/networksegments.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 726 \"./neutron/#sql-backup-dc-419.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 467 \"./neutron/ipavailabilityranges.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 619 \"./neutron/ipamavailabilityranges.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 728 \"./placement/alembic_version.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 729 \"./placement/allocations.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 730 \"./placement/consumers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 731 \"./placement/inventories.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 732 \"./placement/placement_aggregates.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 733 \"./placement/projects.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 734 \"./placement/resource_classes.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 735 \"./placement/resource_provider_aggregates.ibd\"\n[00] 
2025-07-12 20:50:27 DDL tracking : create 736 \"./placement/resource_providers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 737 \"./placement/traits.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 738 \"./placement/users.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 739 \"./placement/resource_provider_traits.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 740 \"./placement/consumer_types.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 741 \"./placement/#sql-alter-dc-47a.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 730 \"./placement/consumers.ibd\",\"./placement/#sql-backup-dc-47a.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 741 \"./placement/#sql-alter-dc-47a.ibd\",\"./placement/consumers.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 730 \"./placement/#sql-backup-dc-47a.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 742 \"./magnum/alembic_version.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 743 \"./magnum/bay.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 744 \"./magnum/baymodel.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 745 \"./magnum/container.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 746 \"./magnum/node.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 747 \"./magnum/pod.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 748 \"./magnum/service.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 749 \"./magnum/replicationcontroller.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 750 \"./magnum/baylock.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 751 \"./magnum/#sql-alter-dc-511.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 748 \"./magnum/service.ibd\",\"./magnum/#sql-backup-dc-511.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 751 \"./magnum/#sql-alter-dc-511.ibd\",\"./magnum/service.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 748 \"./magnum/#sql-backup-dc-511.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : 
create 752 \"./magnum/x509keypair.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 753 \"./magnum/magnum_service.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 750 \"./magnum/baylock.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 746 \"./magnum/node.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 754 \"./magnum/quotas.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 747 \"./magnum/pod.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 751 \"./magnum/service.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 745 \"./magnum/container.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 749 \"./magnum/replicationcontroller.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 744 \"./magnum/baymodel.ibd\",\"./magnum/cluster_template.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 743 \"./magnum/bay.ibd\",\"./magnum/cluster.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 755 \"./magnum/#sql-alter-dc-511.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 744 \"./magnum/cluster_template.ibd\",\"./magnum/#sql-backup-dc-511.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 755 \"./magnum/#sql-alter-dc-511.ibd\",\"./magnum/cluster_template.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 744 \"./magnum/#sql-backup-dc-511.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 756 \"./magnum/federation.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 757 \"./magnum/nodegroup.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 758 \"./grafana/migration_log.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 759 \"./grafana/user.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 759 \"./grafana/user.ibd\",\"./grafana/user_v1.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 760 \"./grafana/user.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 759 \"./grafana/user_v1.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 761 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : 
rename 760 \"./grafana/user.ibd\",\"./grafana/#sql-ib774.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 761 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/user.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 760 \"./grafana/#sql-ib774.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 762 \"./grafana/temp_user.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 762 \"./grafana/temp_user.ibd\",\"./grafana/temp_user_tmp_qwerty.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 763 \"./grafana/temp_user.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 762 \"./grafana/temp_user_tmp_qwerty.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 764 \"./grafana/star.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 765 \"./grafana/org.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 766 \"./grafana/org_user.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 767 \"./grafana/dashboard.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 768 \"./grafana/dashboard_tag.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 767 \"./grafana/dashboard.ibd\",\"./grafana/dashboard_v1.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 769 \"./grafana/dashboard.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 767 \"./grafana/dashboard_v1.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 770 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 769 \"./grafana/dashboard.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 770 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/dashboard.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 769 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 771 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 770 \"./grafana/dashboard.ibd\",\"./grafana/#sql-ib784.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 771 
\"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/dashboard.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 770 \"./grafana/#sql-ib784.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 772 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 771 \"./grafana/dashboard.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 772 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/dashboard.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 771 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 773 \"./grafana/dashboard_provisioning.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 773 \"./grafana/dashboard_provisioning.ibd\",\"./grafana/dashboard_provisioning_tmp_qwerty.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 774 \"./grafana/dashboard_provisioning.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 773 \"./grafana/dashboard_provisioning_tmp_qwerty.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 775 \"./grafana/data_source.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 775 \"./grafana/data_source.ibd\",\"./grafana/data_source_v1.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 776 \"./grafana/data_source.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 775 \"./grafana/data_source_v1.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 777 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 776 \"./grafana/data_source.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 777 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/data_source.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 776 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 778 \"./grafana/api_key.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 778 \"./grafana/api_key.ibd\",\"./grafana/api_key_v1.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : 
create 779 \"./grafana/api_key.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 778 \"./grafana/api_key_v1.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 780 \"./grafana/dashboard_snapshot.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 780 \"./grafana/dashboard_snapshot.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 781 \"./grafana/dashboard_snapshot.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 782 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 781 \"./grafana/dashboard_snapshot.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 782 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/dashboard_snapshot.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 781 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 783 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 782 \"./grafana/dashboard_snapshot.ibd\",\"./grafana/#sql-ib796.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 783 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/dashboard_snapshot.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 782 \"./grafana/#sql-ib796.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 784 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 783 \"./grafana/dashboard_snapshot.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 784 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/dashboard_snapshot.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 783 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 785 \"./grafana/quota.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 786 \"./grafana/plugin_setting.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 787 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 786 
\"./grafana/plugin_setting.ibd\",\"./grafana/#sql-ib800.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 787 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/plugin_setting.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 786 \"./grafana/#sql-ib800.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 788 \"./grafana/session.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 789 \"./grafana/playlist.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 790 \"./grafana/playlist_item.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 791 \"./grafana/preferences.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 792 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 791 \"./grafana/preferences.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 792 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/preferences.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 791 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 793 \"./grafana/alert.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 794 \"./grafana/alert_rule_tag.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 795 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 794 \"./grafana/alert_rule_tag.ibd\",\"./grafana/#sql-ib808.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 795 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/alert_rule_tag.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 794 \"./grafana/#sql-ib808.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 796 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 795 \"./grafana/alert_rule_tag.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 796 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/alert_rule_tag.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 795 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 
2025-07-12 20:50:27 DDL tracking : rename 796 \"./grafana/alert_rule_tag.ibd\",\"./grafana/alert_rule_tag_v1.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 797 \"./grafana/alert_rule_tag.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 796 \"./grafana/alert_rule_tag_v1.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 798 \"./grafana/alert_notification.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 799 \"./grafana/alert_notification_journal.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 799 \"./grafana/alert_notification_journal.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 800 \"./grafana/alert_notification_state.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 801 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 793 \"./grafana/alert.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 801 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/alert.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 793 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 802 \"./grafana/annotation.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 803 \"./grafana/annotation_tag.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 804 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 803 \"./grafana/annotation_tag.ibd\",\"./grafana/#sql-ib817.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 804 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/annotation_tag.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 803 \"./grafana/#sql-ib817.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 805 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 804 \"./grafana/annotation_tag.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 805 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/annotation_tag.ibd\"\n[00] 2025-07-12 20:50:27 DDL 
tracking : delete 804 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 805 \"./grafana/annotation_tag.ibd\",\"./grafana/annotation_tag_v2.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 806 \"./grafana/annotation_tag.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 805 \"./grafana/annotation_tag_v2.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 807 \"./grafana/test_data.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 808 \"./grafana/dashboard_version.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 809 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 808 \"./grafana/dashboard_version.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 809 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/dashboard_version.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 808 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 810 \"./grafana/team.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 811 \"./grafana/team_member.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 812 \"./grafana/dashboard_acl.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 813 \"./grafana/tag.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 814 \"./grafana/login_attempt.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 814 \"./grafana/login_attempt.ibd\",\"./grafana/login_attempt_tmp_qwerty.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 815 \"./grafana/login_attempt.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 814 \"./grafana/login_attempt_tmp_qwerty.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 816 \"./grafana/user_auth.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 817 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 816 \"./grafana/user_auth.ibd\",\"./grafana/#sql-ib830.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 817 
\"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/user_auth.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 816 \"./grafana/#sql-ib830.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 818 \"./grafana/server_lock.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 819 \"./grafana/user_auth_token.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 820 \"./grafana/cache_data.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 821 \"./grafana/short_url.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 822 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 821 \"./grafana/short_url.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 822 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/short_url.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 821 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 823 \"./grafana/alert_definition.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 824 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 823 \"./grafana/alert_definition.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 824 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/alert_definition.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 823 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 824 \"./grafana/alert_definition.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 825 \"./grafana/alert_definition_version.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 826 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 825 \"./grafana/alert_definition_version.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 826 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/alert_definition_version.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 825 
\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 826 \"./grafana/alert_definition_version.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 827 \"./grafana/alert_instance.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 828 \"./grafana/alert_rule.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 829 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 828 \"./grafana/alert_rule.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 829 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/alert_rule.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 828 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 830 \"./grafana/alert_rule_version.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 831 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 830 \"./grafana/alert_rule_version.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 831 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/alert_rule_version.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 830 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 832 \"./grafana/alert_configuration.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 833 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 832 \"./grafana/alert_configuration.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 833 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/alert_configuration.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 832 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 834 \"./grafana/ngalert_configuration.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 835 \"./grafana/provenance_type.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 836 
\"./grafana/alert_image.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 837 \"./grafana/alert_configuration_history.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 838 \"./grafana/library_element.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 839 \"./grafana/library_element_connection.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 840 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 838 \"./grafana/library_element.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 840 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/library_element.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 838 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 841 \"./grafana/data_keys.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 842 \"./grafana/secrets.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 843 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 841 \"./grafana/data_keys.ibd\",\"./grafana/#sql-ib856.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 843 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/data_keys.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 841 \"./grafana/#sql-ib856.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 844 \"./grafana/kv_store.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 845 \"./grafana/permission.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 846 \"./grafana/role.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 847 \"./grafana/team_role.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 848 \"./grafana/user_role.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 849 \"./grafana/builtin_role.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 850 \"./grafana/seed_assignment.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 851 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 850 
\"./grafana/seed_assignment.ibd\",\"./grafana/#sql-ib864.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 851 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/seed_assignment.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 850 \"./grafana/#sql-ib864.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 852 \"./grafana/query_history.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 853 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 852 \"./grafana/query_history.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 853 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/query_history.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 852 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 854 \"./grafana/query_history_details.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 855 \"./grafana/query_history_star.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 856 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 855 \"./grafana/query_history_star.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 856 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/query_history_star.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 855 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 857 \"./grafana/correlation.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 857 \"./grafana/correlation.ibd\",\"./grafana/correlation_tmp_qwerty.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 858 \"./grafana/correlation.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 857 \"./grafana/correlation_tmp_qwerty.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 859 \"./grafana/entity_event.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 860 \"./grafana/dashboard_public_config.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 860 
\"./grafana/dashboard_public_config.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 861 \"./grafana/dashboard_public_config.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 861 \"./grafana/dashboard_public_config.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 862 \"./grafana/dashboard_public_config.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 862 \"./grafana/dashboard_public_config.ibd\",\"./grafana/dashboard_public.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 863 \"./grafana/file.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 864 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 863 \"./grafana/file.ibd\",\"./grafana/#sql-ib877.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 864 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/file.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 863 \"./grafana/#sql-ib877.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 865 \"./grafana/file_meta.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 866 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 865 \"./grafana/file_meta.ibd\",\"./grafana/#sql-ib879.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 866 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/file_meta.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 865 \"./grafana/#sql-ib879.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 867 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 864 \"./grafana/file.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 867 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/file.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 864 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 868 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 851 
\"./grafana/seed_assignment.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 868 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/seed_assignment.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 851 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 869 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 868 \"./grafana/seed_assignment.ibd\",\"./grafana/#sql-ib882.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 869 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/seed_assignment.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 868 \"./grafana/#sql-ib882.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 870 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 869 \"./grafana/seed_assignment.ibd\",\"./grafana/#sql-ib883.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 870 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/seed_assignment.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 869 \"./grafana/#sql-ib883.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 871 \"./grafana/folder.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 872 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 871 \"./grafana/folder.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 872 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/folder.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 871 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 873 \"./grafana/anon_device.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 874 \"./grafana/signing_key.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 875 \"./grafana/sso_setting.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 876 \"./grafana/cloud_migration.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 877 
\"./grafana/cloud_migration_run.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 876 \"./grafana/cloud_migration.ibd\",\"./grafana/cloud_migration_session_tmp_qwerty.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 878 \"./grafana/cloud_migration_session.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 876 \"./grafana/cloud_migration_session_tmp_qwerty.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 877 \"./grafana/cloud_migration_run.ibd\",\"./grafana/cloud_migration_snapshot_tmp_qwerty.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 879 \"./grafana/cloud_migration_snapshot.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 877 \"./grafana/cloud_migration_snapshot_tmp_qwerty.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 880 \"./grafana/cloud_migration_resource.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 881 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 880 \"./grafana/cloud_migration_resource.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 881 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/cloud_migration_resource.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 880 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 882 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 844 \"./grafana/kv_store.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 882 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/kv_store.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : delete 844 \"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 883 \"./grafana/user_external_session.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : create 884 \"./grafana/#sql-alter-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 DDL tracking : rename 883 \"./grafana/user_external_session.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"\n[00] 2025-07-12 20:50:27 
DDL tracking : rename 884 "./grafana/#sql-alter-dc-52c.ibd","./grafana/user_external_session.ibd"
[00] 2025-07-12 20:50:27 DDL tracking : delete 883 "./grafana/#sql-backup-dc-52c.ibd"
[00] 2025-07-12 20:50:27 DDL tracking : create 885 "./grafana/#sql-alter-dc-52c.ibd"
[00] 2025-07-12 20:50:27 DDL tracking : rename 884 "./grafana/user_external_session.ibd","./grafana/#sql-backup-dc-52c.ibd"
[00] 2025-07-12 20:50:27 DDL tracking : rename 885 "./grafana/#sql-alter-dc-52c.ibd","./grafana/user_external_session.ibd"
[00] 2025-07-12 20:50:27 DDL tracking : delete 884 "./grafana/#sql-backup-dc-52c.ibd"
[00] 2025-07-12 20:50:27 DDL tracking : create 886 "./grafana/alert_rule_state.ibd"
[00] 2025-07-12 20:50:27 DDL tracking : create 887 "./grafana/resource_migration_log.ibd"
[00] 2025-07-12 20:50:27 DDL tracking : create 888 "./grafana/resource.ibd"
[00] 2025-07-12 20:50:27 DDL tracking : create 889 "./grafana/resource_history.ibd"
[00] 2025-07-12 20:50:27 DDL tracking : create 890 "./grafana/resource_version.ibd"
[00] 2025-07-12 20:50:27 DDL tracking : create 891 "./grafana/#sql-alter-dc-563.ibd"
[00] 2025-07-12 20:50:27 DDL tracking : rename 890 "./grafana/resource_version.ibd","./grafana/#sql-ib904.ibd"
[00] 2025-07-12 20:50:27 DDL tracking : rename 891 "./grafana/#sql-alter-dc-563.ibd","./grafana/resource_version.ibd"
[00] 2025-07-12 20:50:27 DDL tracking : delete 890 "./grafana/#sql-ib904.ibd"
[00] 2025-07-12 20:50:27 DDL tracking : create 892 "./grafana/resource_blob.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 893 "./octavia/alembic_version.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 894 "./octavia/health_monitor_type.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 895 "./octavia/protocol.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 896 "./octavia/algorithm.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 897 "./octavia/session_persistence_type.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 898 "./octavia/provisioning_status.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 899 "./octavia/operating_status.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 900 "./octavia/pool.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 901 "./octavia/health_monitor.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 902 "./octavia/session_persistence.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 903 "./octavia/member.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 904 "./octavia/load_balancer.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 905 "./octavia/vip.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 906 "./octavia/listener.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 907 "./octavia/sni.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 908 "./octavia/listener_statistics.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 909 "./octavia/amphora.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 910 "./octavia/load_balancer_amphora.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 911 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 909 "./octavia/amphora.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 911 "./octavia/#sql-alter-dc-674.ibd","./octavia/amphora.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 909 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 910 "./octavia/load_balancer_amphora.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 912 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 911 "./octavia/amphora.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 912 "./octavia/#sql-alter-dc-674.ibd","./octavia/amphora.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 911 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 913 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 903 "./octavia/member.ibd","./octavia/#sql-ib926.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 913 "./octavia/#sql-alter-dc-674.ibd","./octavia/member.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 903 "./octavia/#sql-ib926.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 914 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 912 "./octavia/amphora.ibd","./octavia/#sql-ib927.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 914 "./octavia/#sql-alter-dc-674.ibd","./octavia/amphora.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 912 "./octavia/#sql-ib927.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 915 "./octavia/amphora_health.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 916 "./octavia/lb_topology.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 917 "./octavia/amphora_roles.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 918 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 904 "./octavia/load_balancer.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 918 "./octavia/#sql-alter-dc-674.ibd","./octavia/load_balancer.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 904 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 919 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 914 "./octavia/amphora.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 919 "./octavia/#sql-alter-dc-674.ibd","./octavia/amphora.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 914 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 920 "./octavia/vrrp_auth_method.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 921 "./octavia/vrrp_group.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 922 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 918 "./octavia/load_balancer.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 922 "./octavia/#sql-alter-dc-674.ibd","./octavia/load_balancer.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 918 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 923 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 906 "./octavia/listener.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 923 "./octavia/#sql-alter-dc-674.ibd","./octavia/listener.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 906 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 924 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 900 "./octavia/pool.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 924 "./octavia/#sql-alter-dc-674.ibd","./octavia/pool.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 900 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 925 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 913 "./octavia/member.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 925 "./octavia/#sql-alter-dc-674.ibd","./octavia/member.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 913 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 926 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 924 "./octavia/pool.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 926 "./octavia/#sql-alter-dc-674.ibd","./octavia/pool.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 924 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 927 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 923 "./octavia/listener.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 927 "./octavia/#sql-alter-dc-674.ibd","./octavia/listener.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 923 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 928 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 926 "./octavia/pool.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 928 "./octavia/#sql-alter-dc-674.ibd","./octavia/pool.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 926 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 929 "./octavia/l7rule_type.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 930 "./octavia/l7rule_compare_type.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 931 "./octavia/l7policy_action.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 932 "./octavia/l7policy.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 933 "./octavia/l7rule.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 934 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 908 "./octavia/listener_statistics.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 934 "./octavia/#sql-alter-dc-674.ibd","./octavia/listener_statistics.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 908 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 935 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 934 "./octavia/listener_statistics.ibd","./octavia/#sql-ib948.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 935 "./octavia/#sql-alter-dc-674.ibd","./octavia/listener_statistics.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 934 "./octavia/#sql-ib948.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 936 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 935 "./octavia/listener_statistics.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 936 "./octavia/#sql-alter-dc-674.ibd","./octavia/listener_statistics.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 935 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 937 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 936 "./octavia/listener_statistics.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 937 "./octavia/#sql-alter-dc-674.ibd","./octavia/listener_statistics.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 936 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 938 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 901 "./octavia/health_monitor.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 938 "./octavia/#sql-alter-dc-674.ibd","./octavia/health_monitor.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 901 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 939 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 932 "./octavia/l7policy.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 939 "./octavia/#sql-alter-dc-674.ibd","./octavia/l7policy.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 932 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 940 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 933 "./octavia/l7rule.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 940 "./octavia/#sql-alter-dc-674.ibd","./octavia/l7rule.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 933 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 941 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 925 "./octavia/member.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 941 "./octavia/#sql-alter-dc-674.ibd","./octavia/member.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 925 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 942 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 928 "./octavia/pool.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 942 "./octavia/#sql-alter-dc-674.ibd","./octavia/pool.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 928 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 943 "./octavia/quotas.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 944 "./octavia/amphora_build_slots.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 945 "./octavia/amphora_build_request.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 946 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 938 "./octavia/health_monitor.ibd","./octavia/#sql-ib959.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 946 "./octavia/#sql-alter-dc-674.ibd","./octavia/health_monitor.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 938 "./octavia/#sql-ib959.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 947 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 946 "./octavia/health_monitor.ibd","./octavia/#sql-ib960.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 947 "./octavia/#sql-alter-dc-674.ibd","./octavia/health_monitor.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 946 "./octavia/#sql-ib960.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 948 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 947 "./octavia/health_monitor.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 948 "./octavia/#sql-alter-dc-674.ibd","./octavia/health_monitor.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 947 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 949 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 939 "./octavia/l7policy.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 949 "./octavia/#sql-alter-dc-674.ibd","./octavia/l7policy.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 939 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 950 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 949 "./octavia/l7policy.ibd","./octavia/#sql-ib963.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 950 "./octavia/#sql-alter-dc-674.ibd","./octavia/l7policy.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 949 "./octavia/#sql-ib963.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 951 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 948 "./octavia/health_monitor.ibd","./octavia/#sql-ib964.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 951 "./octavia/#sql-alter-dc-674.ibd","./octavia/health_monitor.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 948 "./octavia/#sql-ib964.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 952 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 941 "./octavia/member.ibd","./octavia/#sql-ib965.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 952 "./octavia/#sql-alter-dc-674.ibd","./octavia/member.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 941 "./octavia/#sql-ib965.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 953 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 942 "./octavia/pool.ibd","./octavia/#sql-ib966.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 953 "./octavia/#sql-alter-dc-674.ibd","./octavia/pool.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 942 "./octavia/#sql-ib966.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 954 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 940 "./octavia/l7rule.ibd","./octavia/#sql-ib967.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 954 "./octavia/#sql-alter-dc-674.ibd","./octavia/l7rule.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 940 "./octavia/#sql-ib967.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 955 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 951 "./octavia/health_monitor.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 955 "./octavia/#sql-alter-dc-674.ibd","./octavia/health_monitor.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 951 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 956 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 950 "./octavia/l7policy.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 956 "./octavia/#sql-alter-dc-674.ibd","./octavia/l7policy.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 950 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 957 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 954 "./octavia/l7rule.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 957 "./octavia/#sql-alter-dc-674.ibd","./octavia/l7rule.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 954 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 958 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 952 "./octavia/member.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 958 "./octavia/#sql-alter-dc-674.ibd","./octavia/member.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 952 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 959 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 953 "./octavia/pool.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 959 "./octavia/#sql-alter-dc-674.ibd","./octavia/pool.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 953 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 960 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 957 "./octavia/l7rule.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 960 "./octavia/#sql-alter-dc-674.ibd","./octavia/l7rule.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 957 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 961 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 955 "./octavia/health_monitor.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 961 "./octavia/#sql-alter-dc-674.ibd","./octavia/health_monitor.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 955 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 962 "./octavia/tags.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 963 "./octavia/flavor_profile.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 964 "./octavia/flavor.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 965 "./octavia/client_authentication_mode.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 966 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 927 "./octavia/listener.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 966 "./octavia/#sql-alter-dc-674.ibd","./octavia/listener.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 927 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 967 "./octavia/spares_pool.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 968 "./octavia/listener_cidr.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 969 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 922 "./octavia/load_balancer.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 969 "./octavia/#sql-alter-dc-674.ibd","./octavia/load_balancer.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 922 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 970 "./octavia/availability_zone_profile.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 971 "./octavia/availability_zone.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 972 "./octavia/#sql-alter-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 969 "./octavia/load_balancer.ibd","./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : rename 972 "./octavia/#sql-alter-dc-674.ibd","./octavia/load_balancer.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : delete 969 "./octavia/#sql-backup-dc-674.ibd"
[00] 2025-07-12 20:50:28 DDL tracking : create 973 "./octavia/additional_vip.ibd"
[00] 2025-07-12 20:50:29 Connecting to MariaDB server host: 192.168.16.11, user: backup_shard_0, password: set, port: 3306, socket: not set
[00] 2025-07-12 20:50:29 Connecting to MariaDB server host: 192.168.16.11, user: backup_shard_0, password: set, port: 3306, socket: not set
[00] 2025-07-12 20:50:29 BACKUP STAGE START
[00] 2025-07-12 20:50:29 Acquiring BACKUP LOCKS...
[00] 2025-07-12 20:50:29 Streaming /var/lib/mysql//aria_log_control to 
[00] 2025-07-12 20:50:29 ...done
[00] 2025-07-12 20:50:29 Loading aria_log_control.
[00] 2025-07-12 20:50:29 aria_log_control: last_log_number: 1
[00] 2025-07-12 20:50:29 Start scanning aria tables.
[00] 2025-07-12 20:50:29 Start scanning aria log files.
[00] 2025-07-12 20:50:29 Found 1 aria log files, minimum log number 1, maximum log number 1
[00] 2025-07-12 20:50:29 Stop scanning aria tables.
[00] 2025-07-12 20:50:29 Streaming ./mysql/wsrep_cluster_members.ibd
[00] 2025-07-12 20:50:29 ...done
[00] 2025-07-12 20:50:29 Streaming ./mysql/innodb_index_stats.ibd
[00] 2025-07-12 20:50:29 ...done
[00] 2025-07-12 20:50:29 Streaming ./mysql/wsrep_allowlist.ibd
[00] 2025-07-12 20:50:29 ...done
[00] 2025-07-12 20:50:29 Streaming ./mysql/gtid_slave_pos.ibd
[00] 2025-07-12 20:50:29 ...done
[00] 2025-07-12 20:50:29 Streaming ./mysql/wsrep_streaming_log.ibd
[00] 2025-07-12 20:50:29 ...done
[00] 2025-07-12 20:50:29 Streaming ./mysql/transaction_registry.ibd
[00] 2025-07-12 20:50:29 ...done
[00] 2025-07-12 20:50:29 Streaming ./mysql/innodb_table_stats.ibd
[00] 2025-07-12 20:50:29 ...done
[00] 2025-07-12 20:50:29 Streaming ./mysql/wsrep_cluster.ibd
[00] 2025-07-12 20:50:29 ...done
[00] 2025-07-12 20:50:29 Streaming ibdata1
[00] 2025-07-12 20:50:30 ...done
[00] 2025-07-12 20:50:30 aria table file ./sys/sys_config.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./sys/sys_config.MAD is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/plugin.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/plugin.MAD is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/servers.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/servers.MAD is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/global_priv.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/global_priv.MAD is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/time_zone_leap_second.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/time_zone_leap_second.MAD is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/time_zone_transition_type.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/time_zone_transition_type.MAD is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/proc.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/proc.MAD is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/event.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/event.MAD is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/func.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/func.MAD is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/procs_priv.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/procs_priv.MAD is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/tables_priv.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/tables_priv.MAD is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/time_zone.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/time_zone.MAD is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/columns_priv.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/columns_priv.MAD is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/time_zone_name.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/time_zone_name.MAD is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/roles_mapping.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/roles_mapping.MAD is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/time_zone_transition.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/time_zone_transition.MAD is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/db.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/db.MAD is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/proxies_priv.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/proxies_priv.MAD is copied successfully.
[00] 2025-07-12 20:50:30 Start copying aria log file tail: /var/lib/mysql//aria_log.00000001
[00] 2025-07-12 20:50:30 Stop copying aria log file tail: /var/lib/mysql//aria_log.00000001, copied 425984 bytes
[00] 2025-07-12 20:50:30 BACKUP STAGE FLUSH
[00] 2025-07-12 20:50:30 Start scanning common engine tables, need backup locks: 0, collect log and stat tables: 1
[00] 2025-07-12 20:50:30 Log table found: mysql.slow_log
[00] 2025-07-12 20:50:30 Collect log table file: ./mysql/slow_log.CSV
[00] 2025-07-12 20:50:30 Log table found: mysql.general_log
[00] 2025-07-12 20:50:30 Collect log table file: ./mysql/general_log.CSM
[00] 2025-07-12 20:50:30 Collect log table file: ./mysql/slow_log.CSM
[00] 2025-07-12 20:50:30 Collect log table file: ./mysql/general_log.CSV
[00] 2025-07-12 20:50:30 Stop scanning common engine tables
[00] 2025-07-12 20:50:30 Start copying aria log file tail: /var/lib/mysql//aria_log.00000001
[00] 2025-07-12 20:50:30 Stop copying aria log file tail: /var/lib/mysql//aria_log.00000001, copied 0 bytes
[00] 2025-07-12 20:50:30 aria table file ./mysql/help_topic.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/help_topic.MAD is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/help_keyword.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/help_keyword.MAD is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/help_category.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/help_category.MAD is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/help_relation.MAI is copied successfully.
[00] 2025-07-12 20:50:30 aria table file ./mysql/help_relation.MAD is copied successfully.
[00] 2025-07-12 20:50:30 Start scanning common engine tables, need backup locks: 1, collect log and stat tables: 0
[00] 2025-07-12 20:50:30 Stop scanning common engine tables
[00] 2025-07-12 20:50:30 Starting to backup non-InnoDB tables and files
[01] 2025-07-12 20:50:30 Streaming ./barbican/project_certificate_authorities.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/transport_keys.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/secret_user_metadata.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/secret_stores.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/alembic_version.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/order_barbican_metadata.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/db.opt to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/certificate_authority_metadata.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/secret_acl_users.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/container_acl_users.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/project_quotas.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/projects.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/order_plugin_metadata.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/secret_consumer_metadata.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/container_secret.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/kek_data.frm to 
[00] 2025-07-12 20:50:30 Copied file ./mysql/general_log.CSV for log table `mysql`.`general_log`, 0 bytes
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/container_acls.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/certificate_authorities.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/secrets.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/secret_store_metadata.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/order_retry_tasks.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/project_secret_store.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/encrypted_data.frm to 
[00] 2025-07-12 20:50:30 Copied file ./mysql/slow_log.CSV for log table `mysql`.`slow_log`, 0 bytes
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/orders.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/secret_acls.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/preferred_certificate_authorities.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/containers.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./barbican/container_consumer_metadata.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/sensitive_config.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/assignment.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/system_assignment.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/local_user.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/user_group_membership.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/alembic_version.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/revocation_event.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/application_credential_role.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/project_tag.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/user.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/request_token.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/endpoint_group.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/group.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/user_option.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/db.opt to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/role.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/implied_role.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/application_credential_access_rule.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/nonlocal_user.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/service_provider.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/trust_role.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/identity_provider.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/federation_protocol.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/region.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/policy_association.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/whitelisted_config.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/registered_limit.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/id_mapping.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/access_token.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/endpoint.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/application_credential.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/token.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/project_endpoint.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/federated_user.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/project.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/limit.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/policy.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/project_option.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/project_endpoint_group.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/idp_remote_ids.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/password.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/trust.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/config_register.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/access_rule.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/role_option.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/mapping.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/service.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/credential.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/consumer.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./keystone/expiring_user_group_membership.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./magnum/federation.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./magnum/nodegroup.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./magnum/alembic_version.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./magnum/cluster.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./magnum/x509keypair.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./magnum/db.opt to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./magnum/cluster_template.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./magnum/quotas.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./magnum/magnum_service.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia_persistence/db.opt to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/lb_topology.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/session_persistence_type.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/health_monitor.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/health_monitor_type.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/alembic_version.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/amphora_health.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/provisioning_status.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/l7rule_compare_type.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/l7policy_action.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/db.opt to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/flavor_profile.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/l7rule.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/amphora_build_slots.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/tags.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/availability_zone_profile.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/listener_cidr.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/availability_zone.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/session_persistence.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/vrrp_group.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/pool.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/spares_pool.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/load_balancer.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/vrrp_auth_method.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/amphora_roles.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/amphora_build_request.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/algorithm.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/l7rule_type.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/operating_status.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/amphora.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/sni.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/quotas.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/listener_statistics.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/protocol.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/l7policy.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/additional_vip.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/listener.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./octavia/flavor.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 
Streaming ./octavia/client_authentication_mode.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./octavia/vip.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./octavia/member.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/host_mappings.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/resource_provider_traits.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/alembic_version.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/users.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/aggregate_hosts.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/traits.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/flavor_projects.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/aggregate_metadata.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/db.opt to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/request_specs.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/project_user_quotas.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/projects.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/flavor_extra_specs.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/aggregates.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/key_pairs.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/instance_groups.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/placement_aggregates.frm to 
\n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/build_requests.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/resource_provider_aggregates.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/instance_group_policy.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/inventories.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/allocations.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/resource_providers.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/consumers.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/instance_group_member.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/quota_usages.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/reservations.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/quotas.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/cell_mappings.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/resource_classes.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/quota_classes.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/instance_mappings.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova_api/flavors.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_block_device_mapping.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_security_groups.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_snapshots.frm to \n[01] 
2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_cells.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/instance_faults.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/migrations.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/instance_system_metadata.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/share_mapping.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_migrations.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/instance_type_extra_specs.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_volume_usage_cache.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_extra.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_volume_id_mappings.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_fixed_ips.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_faults.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/security_group_instance_association.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/alembic_version.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_info_caches.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_actions.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/block_device_mapping.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/compute_nodes.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instances.frm 
to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_compute_nodes.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/aggregate_hosts.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/agent_builds.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_pci_devices.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/aggregate_metadata.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_group_member.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/cells.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/services.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/db.opt to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/floating_ips.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_id_mappings.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_aggregates.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/volume_id_mappings.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_snapshot_id_mappings.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_quota_usages.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_actions_events.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/project_user_quotas.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/virtual_interfaces.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_floating_ips.frm to \n[01] 2025-07-12 20:50:30 
...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_aggregate_hosts.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/security_groups.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_type_projects.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_certificates.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_agent_builds.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/tags.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/instance_extra.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_reservations.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/instance_actions_events.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_security_group_instance_association.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/aggregates.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/key_pairs.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/console_pools.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_quota_classes.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_dns_domains.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_system_metadata.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_types.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/instance_groups.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/snapshot_id_mappings.frm to \n[01] 2025-07-12 20:50:30 
...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/instance_id_mappings.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/fixed_ips.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_console_pools.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/instance_info_caches.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_type_extra_specs.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_consoles.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/snapshots.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/instance_actions.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_security_group_rules.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_alembic_version.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_key_pairs.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/pci_devices.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_bw_usage_cache.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_aggregate_metadata.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/resource_provider_aggregates.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/instance_group_policy.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_provider_fw_rules.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_groups.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/task_log.frm to \n[01] 2025-07-12 20:50:30 
...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/certificates.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_s3_images.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/security_group_default_rules.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/bw_usage_cache.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/s3_images.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/inventories.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/dns_domains.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/allocations.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/provider_fw_rules.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/resource_providers.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/instance_types.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/console_auth_tokens.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_networks.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/instance_group_member.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_virtual_interfaces.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_metadata.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_quotas.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/quota_usages.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_security_group_default_rules.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 
Streaming ./nova/reservations.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/instance_metadata.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/volume_usage_cache.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/instances.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/networks.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/instance_type_projects.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/consoles.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/quotas.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_task_log.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_services.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_group_policy.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/quota_classes.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/security_group_rules.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_project_user_quotas.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./glance/image_members.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./glance/metadef_objects.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./glance/alembic_version.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./glance/tasks.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./glance/metadef_resource_types.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./glance/db.opt to \n[01] 2025-07-12 20:50:30 
...done\n[01] 2025-07-12 20:50:30 Streaming ./glance/metadef_tags.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./glance/node_reference.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./glance/metadef_namespaces.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./glance/task_info.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./glance/metadef_namespace_resource_types.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./glance/image_properties.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./glance/cached_images.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./glance/images.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./glance/metadef_properties.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./glance/image_locations.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./glance/image_tags.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./performance_schema/db.opt to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/dnsnameservers.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/ports.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/consistencyhashes.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/qos_fip_policy_bindings.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/vips.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/default_security_group.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/ikepolicies.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming 
./neutron/ipamsubnets.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/dvr_fip_gateway_port_network.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/portdeviceprofiles.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/extradhcpopts.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/address_scopes.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/members.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/routerroutes.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/alembic_version.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/routers.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/portnumaaffinitypolicies.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/ha_router_networks.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/qos_bandwidth_limit_rules.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/bgp_speakers.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/ipamallocationpools.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/ml2_gre_allocations.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/ovn_hash_ring.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/ovn_revision_numbers.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/provisioningblocks.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/routerports.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming 
./neutron/ml2_vlan_allocations.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/ha_router_agent_port_bindings.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/quotausages.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/router_extra_attributes.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/bgp_speaker_peer_bindings.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/resourcedeltas.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/nexthops.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/db.opt to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/arista_provisioned_tenants.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/sessionpersistences.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/local_ips.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/dvr_host_macs.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/porthints.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/ipallocations.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/firewalls.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/qos_network_policy_bindings.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/subnetroutes.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/subnet_service_types.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/meteringlabels.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming 
./neutron/qos_minimum_bandwidth_rules.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/subnets.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/tags.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/subnetpools.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/portdnses.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/qos_router_gw_policy_bindings.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/ml2_geneve_endpoints.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/subports.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/router_ndp_proxy_state.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/flavorserviceprofilebindings.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/ndp_proxies.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/vpnservices.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/portsecuritybindings.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/securitygrouprules.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/subnet_dns_publish_fixed_ips.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/ipamallocations.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/ml2_geneve_allocations.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/address_groups.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/networksecuritybindings.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 
20:50:30 Streaming ./neutron/addressscoperbacs.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/network_segment_ranges.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/qos_policies_default.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/bgp_peers.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/pools.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/networksegments.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/ipallocationpools.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/networkdnsdomains.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/floatingips.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/trunks.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/arista_provisioned_vms.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/network_subnet_lock.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/ml2_port_binding_levels.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/networkdhcpagentbindings.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/auto_allocated_topologies.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/addressgrouprbacs.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/allowedaddresspairs.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/securitygroupdefaultrules.frm to \n[01] 2025-07-12 20:50:30 ...done\n[01] 2025-07-12 20:50:30 Streaming ./neutron/ml2_vxlan_endpoints.frm to \n[01] 2025-07-12 20:50:30 
[01] 2025-07-12 20:50:30 Streaming ./neutron/*.frm to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./designate/*.frm, ./designate/db.opt to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./cinder/*.frm, ./cinder/db.opt to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./nova_cell0/*.frm, ./nova_cell0/db.opt to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:30 Streaming ./placement/*.frm, ./placement/db.opt to 
[01] 2025-07-12 20:50:30 ...done
[01] 2025-07-12 20:50:31 Streaming ./sys/*.frm, ./sys/x@0024*.frm, ./sys/db.opt to 
[01] 2025-07-12 20:50:31 ...done
[01] 2025-07-12 20:50:31 Streaming ./mysql/*.frm, ./mysql/db.opt to 
[01] 2025-07-12 20:50:31 ...done
[01] 2025-07-12 20:50:31 Streaming ./horizon/*.frm, ./horizon/db.opt to 
[01] 2025-07-12 20:50:31 ...done
[01] 2025-07-12 20:50:31 Streaming ./grafana/*.frm to 
...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/query_history_details.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/cache_data.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/data_source.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/resource_version.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/dashboard.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/alert_configuration_history.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/alert_notification.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/file_meta.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/alert_rule_tag.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/dashboard_version.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/plugin_setting.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/user.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/sso_setting.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/alert.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/user_external_session.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/resource.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/db.opt to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/org.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/role.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/session.frm to \n[01] 
2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/short_url.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/dashboard_public.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/query_history.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/entity_event.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/alert_instance.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/resource_blob.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/folder.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/dashboard_acl.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/user_auth_token.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/tag.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/alert_image.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/team_member.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/kv_store.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/user_auth.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/alert_rule.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/data_keys.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/temp_user.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/alert_configuration.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/org_user.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/builtin_role.frm to \n[01] 
2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/resource_history.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/migration_log.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/secrets.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/query_history_star.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/star.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/playlist_item.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/library_element_connection.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/user_role.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/annotation_tag.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/alert_rule_version.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/team.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/permission.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/quota.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/file.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/api_key.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/library_element.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/team_role.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/dashboard_tag.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/alert_rule_state.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming 
./grafana/dashboard_provisioning.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/server_lock.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/cloud_migration_session.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/ngalert_configuration.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/resource_migration_log.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/dashboard_snapshot.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/anon_device.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/annotation.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/signing_key.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/cloud_migration_resource.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/preferences.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/playlist.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/seed_assignment.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/alert_notification_state.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/provenance_type.frm to \n[01] 2025-07-12 20:50:31 ...done\n[01] 2025-07-12 20:50:31 Streaming ./grafana/cloud_migration_snapshot.frm to \n[01] 2025-07-12 20:50:31 ...done\n[00] 2025-07-12 20:50:31 Finished backing up non-InnoDB tables and files\n[00] 2025-07-12 20:50:31 Waiting for log copy thread to read lsn 42893985\n[00] 2025-07-12 20:53:18 Retrying read of log at LSN=42850134\n[00] 2025-07-12 20:53:19 Retrying read of log at LSN=42850134\n[00] 2025-07-12 20:53:21 Retrying read of log at 
LSN=42850134\n[00] 2025-07-12 20:53:22 Retrying read of log at LSN=42850134\n[00] 2025-07-12 20:53:22 Was only able to copy log from 60383 to 42850134, not 42893985; try increasing innodb_log_file_size\nmariabackup: Stopping log copying thread.[00] 2025-07-12 20:53:22 Retrying read of log at LSN=42850134\n\n", "stderr_lines": ["INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json", "INFO:__main__:Validating config file", "INFO:__main__:Kolla config strategy set to: COPY_ALWAYS", "INFO:__main__:Copying /etc/mysql/my.cnf to /etc/kolla/defaults/etc/mysql/my.cnf", "INFO:__main__:Copying permissions from /etc/mysql/my.cnf onto /etc/kolla/defaults/etc/mysql/my.cnf", "INFO:__main__:Copying service configuration files", "INFO:__main__:Deleting /etc/mysql/my.cnf", "INFO:__main__:Copying /var/lib/kolla/config_files/my.cnf to /etc/mysql/my.cnf", "INFO:__main__:Setting permission for /etc/mysql/my.cnf", "INFO:__main__:Writing out command to execute", "INFO:__main__:Setting permission for /var/log/kolla/mariadb", "INFO:__main__:Setting permission for /backup", "[00] 2025-07-12 20:50:19 Connecting to MariaDB server host: 192.168.16.11, user: backup_shard_0, password: set, port: 3306, socket: not set", "[00] 2025-07-12 20:50:19 Using server version 10.11.13-MariaDB-deb12-log", "mariabackup based on MariaDB server 10.11.13-MariaDB debian-linux-gnu (x86_64)", "[00] 2025-07-12 20:50:19 uses posix_fadvise().", "[00] 2025-07-12 20:50:19 cd to /var/lib/mysql/", "[00] 2025-07-12 20:50:19 open files limit requested 0, set to 1048576", "[00] 2025-07-12 20:50:19 mariabackup: using the following InnoDB configuration:", "[00] 2025-07-12 20:50:19 innodb_data_home_dir = ", "[00] 2025-07-12 20:50:19 innodb_data_file_path = ibdata1:12M:autoextend", "[00] 2025-07-12 20:50:19 innodb_log_group_home_dir = ./", "[00] 2025-07-12 20:50:19 InnoDB: Using liburing", "2025-07-12 20:50:19 0 [Note] InnoDB: Number of transaction pools: 1", "mariabackup: io_uring_queue_init() failed 
with EPERM: sysctl kernel.io_uring_disabled has the value 2, or 1 and the user of the process is not a member of sysctl kernel.io_uring_group. (see man 2 io_uring_setup).", "2025-07-12 20:50:19 0 [Warning] InnoDB: liburing disabled: falling back to innodb_use_native_aio=OFF", "2025-07-12 20:50:19 0 [Note] InnoDB: Memory-mapped log (block size=512 bytes)", "[00] 2025-07-12 20:50:19 mariabackup: Generating a list of tablespaces", "[00] 2025-07-12 20:50:27 DDL tracking : create 9 \"./horizon/django_migrations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 10 \"./horizon/django_content_type.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 11 \"./horizon/#sql-alter-dc-7b.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 10 \"./horizon/django_content_type.ibd\",\"./horizon/#sql-ib24.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 11 \"./horizon/#sql-alter-dc-7b.ibd\",\"./horizon/django_content_type.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 10 \"./horizon/#sql-ib24.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 12 \"./horizon/auth_permission.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 13 \"./horizon/auth_group.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 14 \"./horizon/auth_group_permissions.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 15 \"./horizon/#sql-alter-dc-7b.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 12 \"./horizon/auth_permission.ibd\",\"./horizon/#sql-backup-dc-7b.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 15 \"./horizon/#sql-alter-dc-7b.ibd\",\"./horizon/auth_permission.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 12 \"./horizon/#sql-backup-dc-7b.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 16 \"./horizon/#sql-alter-dc-7b.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 14 \"./horizon/auth_group_permissions.ibd\",\"./horizon/#sql-backup-dc-7b.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 16 
\"./horizon/#sql-alter-dc-7b.ibd\",\"./horizon/auth_group_permissions.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 14 \"./horizon/#sql-backup-dc-7b.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 17 \"./horizon/#sql-alter-dc-7b.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 16 \"./horizon/auth_group_permissions.ibd\",\"./horizon/#sql-backup-dc-7b.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 17 \"./horizon/#sql-alter-dc-7b.ibd\",\"./horizon/auth_group_permissions.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 16 \"./horizon/#sql-backup-dc-7b.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 18 \"./horizon/#sql-alter-dc-7b.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 15 \"./horizon/auth_permission.ibd\",\"./horizon/#sql-backup-dc-7b.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 18 \"./horizon/#sql-alter-dc-7b.ibd\",\"./horizon/auth_permission.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 15 \"./horizon/#sql-backup-dc-7b.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 19 \"./horizon/#sql-alter-dc-7b.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 13 \"./horizon/auth_group.ibd\",\"./horizon/#sql-backup-dc-7b.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 19 \"./horizon/#sql-alter-dc-7b.ibd\",\"./horizon/auth_group.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 13 \"./horizon/#sql-backup-dc-7b.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 20 \"./horizon/django_session.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 21 \"./keystone/alembic_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 22 \"./keystone/application_credential.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 23 \"./keystone/assignment.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 24 \"./keystone/access_rule.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 25 \"./keystone/config_register.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking 
: create 26 \"./keystone/consumer.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 27 \"./keystone/credential.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 28 \"./keystone/group.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 29 \"./keystone/id_mapping.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 30 \"./keystone/identity_provider.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 31 \"./keystone/idp_remote_ids.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 32 \"./keystone/mapping.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 33 \"./keystone/policy.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 34 \"./keystone/policy_association.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 35 \"./keystone/project.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 36 \"./keystone/project_endpoint.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 37 \"./keystone/project_option.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 38 \"./keystone/project_tag.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 39 \"./keystone/region.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 40 \"./keystone/registered_limit.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 41 \"./keystone/request_token.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 42 \"./keystone/revocation_event.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 43 \"./keystone/role.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 44 \"./keystone/role_option.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 45 \"./keystone/sensitive_config.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 46 \"./keystone/service.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 47 \"./keystone/service_provider.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 48 \"./keystone/system_assignment.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 49 \"./keystone/token.ibd\"", "[00] 2025-07-12 
20:50:27 DDL tracking : create 50 \"./keystone/trust.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 51 \"./keystone/trust_role.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 52 \"./keystone/user.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 53 \"./keystone/user_group_membership.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 54 \"./keystone/user_option.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 55 \"./keystone/whitelisted_config.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 56 \"./keystone/access_token.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 57 \"./keystone/application_credential_role.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 58 \"./keystone/application_credential_access_rule.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 59 \"./keystone/endpoint.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 60 \"./keystone/endpoint_group.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 61 \"./keystone/expiring_user_group_membership.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 62 \"./keystone/federation_protocol.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 63 \"./keystone/implied_role.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 64 \"./keystone/limit.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 65 \"./keystone/local_user.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 66 \"./keystone/nonlocal_user.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 67 \"./keystone/password.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 68 \"./keystone/project_endpoint_group.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 69 \"./keystone/federated_user.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 70 \"./nova_api/alembic_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 71 \"./nova_api/cell_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 72 \"./nova_api/host_mappings.ibd\"", 
"[00] 2025-07-12 20:50:27 DDL tracking : create 73 \"./nova_api/instance_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 74 \"./nova_api/flavors.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 75 \"./nova_api/flavor_extra_specs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 76 \"./nova_api/flavor_projects.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 77 \"./nova_api/request_specs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 78 \"./nova_api/build_requests.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 79 \"./nova_api/key_pairs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 80 \"./nova_api/projects.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 81 \"./nova_api/users.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 82 \"./nova_api/resource_classes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 83 \"./nova_api/resource_providers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 84 \"./nova_api/inventories.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 85 \"./nova_api/traits.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 86 \"./nova_api/allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 87 \"./nova_api/consumers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 88 \"./nova_api/resource_provider_aggregates.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 89 \"./nova_api/resource_provider_traits.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 90 \"./nova_api/placement_aggregates.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 91 \"./nova_api/aggregates.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 92 \"./nova_api/aggregate_hosts.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 93 \"./nova_api/aggregate_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 94 \"./nova_api/instance_groups.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 95 
\"./nova_api/instance_group_policy.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 96 \"./nova_api/instance_group_member.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 97 \"./nova_api/quota_classes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 98 \"./nova_api/quota_usages.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 99 \"./nova_api/quotas.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 100 \"./nova_api/project_user_quotas.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 101 \"./nova_api/reservations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 102 \"./nova_cell0/alembic_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 103 \"./nova_cell0/instances.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 104 \"./nova_cell0/agent_builds.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 105 \"./nova_cell0/aggregates.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 106 \"./nova_cell0/aggregate_hosts.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 107 \"./nova_cell0/aggregate_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 108 \"./nova_cell0/allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 109 \"./nova_cell0/block_device_mapping.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 110 \"./nova_cell0/bw_usage_cache.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 111 \"./nova_cell0/cells.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 112 \"./nova_cell0/certificates.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 113 \"./nova_cell0/compute_nodes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 114 \"./nova_cell0/console_auth_tokens.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 115 \"./nova_cell0/console_pools.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 116 \"./nova_cell0/consoles.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 117 \"./nova_cell0/dns_domains.ibd\"", "[00] 
2025-07-12 20:50:27 DDL tracking : create 118 \"./nova_cell0/fixed_ips.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 119 \"./nova_cell0/floating_ips.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 120 \"./nova_cell0/instance_faults.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 121 \"./nova_cell0/instance_id_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 122 \"./nova_cell0/instance_info_caches.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 123 \"./nova_cell0/instance_groups.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 124 \"./nova_cell0/instance_group_policy.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 125 \"./nova_cell0/instance_group_member.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 126 \"./nova_cell0/instance_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 127 \"./nova_cell0/instance_system_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 128 \"./nova_cell0/instance_types.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 129 \"./nova_cell0/instance_type_extra_specs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 130 \"./nova_cell0/instance_type_projects.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 131 \"./nova_cell0/instance_actions.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 132 \"./nova_cell0/instance_actions_events.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 133 \"./nova_cell0/instance_extra.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 134 \"./nova_cell0/inventories.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 135 \"./nova_cell0/key_pairs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 136 \"./nova_cell0/migrations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 137 \"./nova_cell0/networks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 138 \"./nova_cell0/pci_devices.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 139 
\"./nova_cell0/provider_fw_rules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 140 \"./nova_cell0/quota_classes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 141 \"./nova_cell0/quota_usages.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 142 \"./nova_cell0/quotas.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 143 \"./nova_cell0/project_user_quotas.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 144 \"./nova_cell0/reservations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 145 \"./nova_cell0/resource_providers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 146 \"./nova_cell0/resource_provider_aggregates.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 147 \"./nova_cell0/s3_images.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 148 \"./nova_cell0/security_groups.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 149 \"./nova_cell0/security_group_instance_association.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 150 \"./nova_cell0/security_group_rules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 151 \"./nova_cell0/security_group_default_rules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 152 \"./nova_cell0/services.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 153 \"./nova_cell0/snapshot_id_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 154 \"./nova_cell0/snapshots.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 155 \"./nova_cell0/tags.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 156 \"./nova_cell0/task_log.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 157 \"./nova_cell0/virtual_interfaces.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 158 \"./nova_cell0/volume_id_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 159 \"./nova_cell0/volume_usage_cache.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 160 \"./nova_cell0/#sql-alter-dc-11a.ibd\"", "[00] 2025-07-12 20:50:27 
DDL tracking : rename 145 \"./nova_cell0/resource_providers.ibd\",\"./nova_cell0/#sql-backup-dc-11a.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 160 \"./nova_cell0/#sql-alter-dc-11a.ibd\",\"./nova_cell0/resource_providers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 145 \"./nova_cell0/#sql-backup-dc-11a.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 161 \"./nova_cell0/shadow_agent_builds.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 162 \"./nova_cell0/shadow_aggregate_hosts.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 163 \"./nova_cell0/shadow_aggregates.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 164 \"./nova_cell0/shadow_aggregate_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 165 \"./nova_cell0/shadow_alembic_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 166 \"./nova_cell0/shadow_block_device_mapping.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 167 \"./nova_cell0/shadow_instances.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 168 \"./nova_cell0/shadow_bw_usage_cache.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 169 \"./nova_cell0/shadow_cells.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 170 \"./nova_cell0/shadow_certificates.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 171 \"./nova_cell0/shadow_compute_nodes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 172 \"./nova_cell0/shadow_console_pools.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 173 \"./nova_cell0/shadow_consoles.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 174 \"./nova_cell0/shadow_dns_domains.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 175 \"./nova_cell0/shadow_fixed_ips.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 176 \"./nova_cell0/shadow_floating_ips.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 177 \"./nova_cell0/shadow_instance_actions.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : 
create 178 \"./nova_cell0/shadow_instance_actions_events.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 179 \"./nova_cell0/shadow_instance_extra.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 180 \"./nova_cell0/shadow_instance_faults.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 181 \"./nova_cell0/shadow_instance_group_member.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 182 \"./nova_cell0/shadow_instance_groups.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 183 \"./nova_cell0/shadow_instance_group_policy.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 184 \"./nova_cell0/shadow_instance_id_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 185 \"./nova_cell0/shadow_instance_info_caches.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 186 \"./nova_cell0/shadow_instance_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 187 \"./nova_cell0/shadow_instance_system_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 188 \"./nova_cell0/shadow_instance_type_extra_specs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 189 \"./nova_cell0/shadow_instance_types.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 190 \"./nova_cell0/shadow_instance_type_projects.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 191 \"./nova_cell0/shadow_key_pairs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 192 \"./nova_cell0/shadow_migrations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 193 \"./nova_cell0/shadow_networks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 194 \"./nova_cell0/shadow_pci_devices.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 195 \"./nova_cell0/shadow_project_user_quotas.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 196 \"./nova_cell0/shadow_provider_fw_rules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 197 \"./nova_cell0/shadow_quota_classes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : 
create 198 \"./nova_cell0/shadow_quota_usages.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 199 \"./nova_cell0/shadow_quotas.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 200 \"./nova_cell0/shadow_reservations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 201 \"./nova_cell0/shadow_s3_images.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 202 \"./nova_cell0/shadow_security_group_default_rules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 203 \"./nova_cell0/shadow_security_group_instance_association.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 204 \"./nova_cell0/shadow_security_groups.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 205 \"./nova_cell0/shadow_security_group_rules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 206 \"./nova_cell0/shadow_services.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 207 \"./nova_cell0/shadow_snapshot_id_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 208 \"./nova_cell0/shadow_snapshots.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 209 \"./nova_cell0/shadow_task_log.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 210 \"./nova_cell0/shadow_virtual_interfaces.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 211 \"./nova_cell0/shadow_volume_id_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 212 \"./nova_cell0/shadow_volume_usage_cache.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 213 \"./nova_cell0/share_mapping.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 214 \"./cinder/alembic_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 215 \"./cinder/services.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 216 \"./cinder/consistencygroups.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 217 \"./cinder/cgsnapshots.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 218 \"./cinder/groups.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 219 
\"./cinder/group_snapshots.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 220 \"./cinder/volumes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 221 \"./cinder/volume_attachment.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 222 \"./cinder/attachment_specs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 223 \"./cinder/snapshots.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 224 \"./cinder/snapshot_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 225 \"./cinder/quality_of_service_specs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 226 \"./cinder/volume_types.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 227 \"./cinder/volume_type_projects.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 228 \"./cinder/volume_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 229 \"./cinder/volume_type_extra_specs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 230 \"./cinder/quotas.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 231 \"./cinder/quota_classes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 232 \"./cinder/quota_usages.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 233 \"./cinder/reservations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 234 \"./cinder/volume_glance_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 235 \"./cinder/backups.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 236 \"./cinder/backup_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 237 \"./cinder/transfers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 238 \"./cinder/encryption.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 239 \"./cinder/volume_admin_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 240 \"./cinder/driver_initiator_data.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 241 \"./cinder/image_volume_cache_entries.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : 
create 242 \"./cinder/messages.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 243 \"./cinder/clusters.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 244 \"./cinder/workers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 245 \"./cinder/group_types.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 246 \"./cinder/group_type_specs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 247 \"./cinder/group_type_projects.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 248 \"./cinder/group_volume_type_mapping.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 249 \"./cinder/default_volume_types.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 250 \"./cinder/#sql-alter-dc-11a.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 241 \"./cinder/image_volume_cache_entries.ibd\",\"./cinder/#sql-ib263.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 250 \"./cinder/#sql-alter-dc-11a.ibd\",\"./cinder/image_volume_cache_entries.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 241 \"./cinder/#sql-ib263.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 251 \"./cinder/#sql-alter-dc-11a.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 244 \"./cinder/workers.ibd\",\"./cinder/#sql-backup-dc-11a.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 251 \"./cinder/#sql-alter-dc-11a.ibd\",\"./cinder/workers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 244 \"./cinder/#sql-backup-dc-11a.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 252 \"./cinder/#sql-alter-dc-11a.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 220 \"./cinder/volumes.ibd\",\"./cinder/#sql-ib265.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 252 \"./cinder/#sql-alter-dc-11a.ibd\",\"./cinder/volumes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 220 \"./cinder/#sql-ib265.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 253 \"./cinder/#sql-alter-dc-11a.ibd\"", "[00] 2025-07-12 20:50:27 DDL 
tracking : rename 223 \"./cinder/snapshots.ibd\",\"./cinder/#sql-ib266.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 253 \"./cinder/#sql-alter-dc-11a.ibd\",\"./cinder/snapshots.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 223 \"./cinder/#sql-ib266.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 254 \"./glance/alembic_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 255 \"./glance/images.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 256 \"./glance/image_properties.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 257 \"./glance/image_locations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 258 \"./glance/image_members.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 259 \"./glance/image_tags.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 260 \"./glance/tasks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 261 \"./glance/task_info.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 262 \"./glance/metadef_namespaces.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 263 \"./glance/metadef_resource_types.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 264 \"./glance/metadef_namespace_resource_types.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 265 \"./glance/metadef_objects.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 266 \"./glance/metadef_properties.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 267 \"./glance/metadef_tags.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 268 \"./glance/artifacts.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 269 \"./glance/artifact_blobs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 270 \"./glance/artifact_dependencies.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 271 \"./glance/artifact_properties.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 272 \"./glance/artifact_tags.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 273 
\"./glance/artifact_blob_locations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 274 \"./glance/#sql-alter-dc-98.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 255 \"./glance/images.ibd\",\"./glance/#sql-ib287.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 274 \"./glance/#sql-alter-dc-98.ibd\",\"./glance/images.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 255 \"./glance/#sql-ib287.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 275 \"./glance/node_reference.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 276 \"./glance/cached_images.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 277 \"./glance/#sql-alter-dc-98.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 274 \"./glance/images.ibd\",\"./glance/#sql-ib290.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 277 \"./glance/#sql-alter-dc-98.ibd\",\"./glance/images.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 274 \"./glance/#sql-ib290.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 273 \"./glance/artifact_blob_locations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 271 \"./glance/artifact_properties.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 269 \"./glance/artifact_blobs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 270 \"./glance/artifact_dependencies.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 272 \"./glance/artifact_tags.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 268 \"./glance/artifacts.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 278 \"./nova/alembic_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 279 \"./nova/instances.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 280 \"./nova/agent_builds.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 281 \"./nova/aggregates.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 282 \"./nova/aggregate_hosts.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 283 
\"./nova/aggregate_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 284 \"./nova/allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 285 \"./nova/block_device_mapping.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 286 \"./nova/bw_usage_cache.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 287 \"./nova/cells.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 288 \"./nova/certificates.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 289 \"./nova/compute_nodes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 290 \"./nova/console_auth_tokens.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 291 \"./nova/console_pools.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 292 \"./nova/consoles.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 293 \"./nova/dns_domains.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 294 \"./nova/fixed_ips.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 295 \"./nova/floating_ips.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 296 \"./nova/instance_faults.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 297 \"./nova/instance_id_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 298 \"./nova/instance_info_caches.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 299 \"./nova/instance_groups.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 300 \"./nova/instance_group_policy.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 301 \"./nova/instance_group_member.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 302 \"./nova/instance_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 303 \"./nova/instance_system_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 304 \"./nova/instance_types.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 305 \"./nova/instance_type_extra_specs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 306 
\"./nova/instance_type_projects.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 307 \"./nova/instance_actions.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 308 \"./nova/instance_actions_events.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 309 \"./nova/instance_extra.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 310 \"./nova/inventories.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 311 \"./nova/key_pairs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 312 \"./nova/migrations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 313 \"./nova/networks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 314 \"./nova/pci_devices.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 315 \"./nova/provider_fw_rules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 316 \"./nova/quota_classes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 317 \"./nova/quota_usages.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 318 \"./nova/quotas.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 319 \"./nova/project_user_quotas.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 320 \"./nova/reservations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 321 \"./nova/resource_providers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 322 \"./nova/resource_provider_aggregates.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 323 \"./nova/s3_images.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 324 \"./nova/security_groups.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 325 \"./nova/security_group_instance_association.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 326 \"./nova/security_group_rules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 327 \"./nova/security_group_default_rules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 328 \"./nova/services.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 329 
\"./nova/snapshot_id_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 330 \"./nova/snapshots.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 331 \"./nova/tags.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 332 \"./nova/task_log.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 333 \"./nova/virtual_interfaces.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 334 \"./nova/volume_id_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 335 \"./nova/volume_usage_cache.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 336 \"./nova/#sql-alter-dc-23f.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 321 \"./nova/resource_providers.ibd\",\"./nova/#sql-backup-dc-23f.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 336 \"./nova/#sql-alter-dc-23f.ibd\",\"./nova/resource_providers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 321 \"./nova/#sql-backup-dc-23f.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 337 \"./nova/shadow_agent_builds.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 338 \"./nova/shadow_aggregate_hosts.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 339 \"./nova/shadow_aggregates.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 340 \"./nova/shadow_aggregate_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 341 \"./nova/shadow_alembic_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 342 \"./nova/shadow_block_device_mapping.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 343 \"./nova/shadow_instances.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 344 \"./nova/shadow_bw_usage_cache.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 345 \"./nova/shadow_cells.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 346 \"./nova/shadow_certificates.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 347 \"./nova/shadow_compute_nodes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 348 
\"./nova/shadow_console_pools.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 349 \"./nova/shadow_consoles.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 350 \"./nova/shadow_dns_domains.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 351 \"./nova/shadow_fixed_ips.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 352 \"./nova/shadow_floating_ips.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 353 \"./nova/shadow_instance_actions.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 354 \"./nova/shadow_instance_actions_events.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 355 \"./nova/shadow_instance_extra.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 356 \"./nova/shadow_instance_faults.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 357 \"./nova/shadow_instance_group_member.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 358 \"./nova/shadow_instance_groups.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 359 \"./nova/shadow_instance_group_policy.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 360 \"./nova/shadow_instance_id_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 361 \"./nova/shadow_instance_info_caches.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 362 \"./nova/shadow_instance_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 363 \"./nova/shadow_instance_system_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 364 \"./nova/shadow_instance_type_extra_specs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 365 \"./nova/shadow_instance_types.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 366 \"./nova/shadow_instance_type_projects.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 367 \"./nova/shadow_key_pairs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 368 \"./nova/shadow_migrations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 369 \"./nova/shadow_networks.ibd\"", "[00] 
2025-07-12 20:50:27 DDL tracking : create 370 \"./nova/shadow_pci_devices.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 371 \"./nova/shadow_project_user_quotas.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 372 \"./nova/shadow_provider_fw_rules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 373 \"./nova/shadow_quota_classes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 374 \"./nova/shadow_quota_usages.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 375 \"./nova/shadow_quotas.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 376 \"./nova/shadow_reservations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 377 \"./nova/shadow_s3_images.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 378 \"./nova/shadow_security_group_default_rules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 379 \"./nova/shadow_security_group_instance_association.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 380 \"./nova/shadow_security_groups.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 381 \"./nova/shadow_security_group_rules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 382 \"./nova/shadow_services.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 383 \"./nova/shadow_snapshot_id_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 384 \"./nova/shadow_snapshots.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 385 \"./nova/shadow_task_log.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 386 \"./nova/shadow_virtual_interfaces.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 387 \"./nova/shadow_volume_id_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 388 \"./nova/shadow_volume_usage_cache.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 389 \"./nova/share_mapping.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 390 \"./barbican/alembic_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 391 
\"./barbican/projects.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 392 \"./barbican/secret_stores.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 393 \"./barbican/transport_keys.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 394 \"./barbican/certificate_authorities.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 395 \"./barbican/containers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 396 \"./barbican/kek_data.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 397 \"./barbican/project_quotas.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 398 \"./barbican/project_secret_store.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 399 \"./barbican/secrets.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 400 \"./barbican/certificate_authority_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 401 \"./barbican/container_acls.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 402 \"./barbican/container_consumer_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 403 \"./barbican/container_secret.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 404 \"./barbican/encrypted_data.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 405 \"./barbican/orders.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 406 \"./barbican/preferred_certificate_authorities.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 407 \"./barbican/project_certificate_authorities.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 408 \"./barbican/secret_acls.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 409 \"./barbican/secret_store_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 410 \"./barbican/secret_user_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 411 \"./barbican/container_acl_users.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 412 \"./barbican/order_barbican_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : 
create 413 \"./barbican/order_plugin_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 414 \"./barbican/order_retry_tasks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 415 \"./barbican/secret_acl_users.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 416 \"./barbican/secret_consumer_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 417 \"./barbican/#sql-alter-dc-345.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 416 \"./barbican/secret_consumer_metadata.ibd\",\"./barbican/#sql-ib430.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 417 \"./barbican/#sql-alter-dc-345.ibd\",\"./barbican/secret_consumer_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 416 \"./barbican/#sql-ib430.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 418 \"./barbican/#sql-alter-dc-345.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 417 \"./barbican/secret_consumer_metadata.ibd\",\"./barbican/#sql-backup-dc-345.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 418 \"./barbican/#sql-alter-dc-345.ibd\",\"./barbican/secret_consumer_metadata.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 417 \"./barbican/#sql-backup-dc-345.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 419 \"./designate/alembic_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 420 \"./designate/pools.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 421 \"./designate/pool_ns_records.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 422 \"./designate/pool_attributes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 423 \"./designate/domains.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 424 \"./designate/domain_attributes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 425 \"./designate/recordsets.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 426 \"./designate/records.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 427 \"./designate/quotas.ibd\"", 
"[00] 2025-07-12 20:50:27 DDL tracking : create 428 \"./designate/tsigkeys.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 429 \"./designate/tlds.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 430 \"./designate/zone_transfer_requests.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 431 \"./designate/zone_transfer_accepts.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 432 \"./designate/zone_tasks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 433 \"./designate/blacklists.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 423 \"./designate/domains.ibd\",\"./designate/zones.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 434 \"./designate/#sql-alter-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 424 \"./designate/domain_attributes.ibd\",\"./designate/#sql-ib447.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 434 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/domain_attributes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 424 \"./designate/#sql-ib447.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 434 \"./designate/domain_attributes.ibd\",\"./designate/zone_attributes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 435 \"./designate/#sql-alter-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 425 \"./designate/recordsets.ibd\",\"./designate/#sql-ib448.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 435 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/recordsets.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 425 \"./designate/#sql-ib448.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 436 \"./designate/#sql-alter-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 435 \"./designate/recordsets.ibd\",\"./designate/#sql-ib449.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 436 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/recordsets.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 435 
\"./designate/#sql-ib449.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 437 \"./designate/#sql-alter-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 426 \"./designate/records.ibd\",\"./designate/#sql-ib450.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 437 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/records.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 426 \"./designate/#sql-ib450.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 438 \"./designate/#sql-alter-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 437 \"./designate/records.ibd\",\"./designate/#sql-ib451.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 438 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/records.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 437 \"./designate/#sql-ib451.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 439 \"./designate/#sql-alter-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 430 \"./designate/zone_transfer_requests.ibd\",\"./designate/#sql-ib452.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 439 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/zone_transfer_requests.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 430 \"./designate/#sql-ib452.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 440 \"./designate/#sql-alter-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 431 \"./designate/zone_transfer_accepts.ibd\",\"./designate/#sql-ib453.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 440 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/zone_transfer_accepts.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 431 \"./designate/#sql-ib453.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 441 \"./designate/#sql-alter-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 423 \"./designate/zones.ibd\",\"./designate/#sql-backup-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 441 
\"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/zones.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 423 \"./designate/#sql-backup-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 442 \"./designate/#sql-alter-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 434 \"./designate/zone_attributes.ibd\",\"./designate/#sql-backup-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 442 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/zone_attributes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 434 \"./designate/#sql-backup-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 443 \"./designate/#sql-alter-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 436 \"./designate/recordsets.ibd\",\"./designate/#sql-backup-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 443 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/recordsets.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 436 \"./designate/#sql-backup-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 444 \"./designate/#sql-alter-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 438 \"./designate/records.ibd\",\"./designate/#sql-backup-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 444 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/records.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 438 \"./designate/#sql-backup-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 445 \"./designate/#sql-alter-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 439 \"./designate/zone_transfer_requests.ibd\",\"./designate/#sql-backup-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 445 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/zone_transfer_requests.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 439 \"./designate/#sql-backup-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 446 
\"./designate/#sql-alter-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 440 \"./designate/zone_transfer_accepts.ibd\",\"./designate/#sql-backup-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 446 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/zone_transfer_accepts.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 440 \"./designate/#sql-backup-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 447 \"./designate/#sql-alter-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 432 \"./designate/zone_tasks.ibd\",\"./designate/#sql-backup-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 447 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/zone_tasks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 432 \"./designate/#sql-backup-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 448 \"./designate/zone_masters.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 449 \"./designate/#sql-alter-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 442 \"./designate/zone_attributes.ibd\",\"./designate/#sql-backup-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 449 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/zone_attributes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 442 \"./designate/#sql-backup-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 450 \"./designate/pool_nameservers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 451 \"./designate/pool_targets.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 452 \"./designate/pool_target_masters.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 453 \"./designate/pool_target_options.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 454 \"./designate/pool_also_notifies.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 455 \"./designate/service_statuses.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 456 \"./designate/shared_zones.ibd\"", 
"[00] 2025-07-12 20:50:27 DDL tracking : create 457 \"./designate/#sql-alter-dc-3d3.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 441 \"./designate/zones.ibd\",\"./designate/#sql-ib470.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 457 \"./designate/#sql-alter-dc-3d3.ibd\",\"./designate/zones.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 441 \"./designate/#sql-ib470.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 458 \"./neutron/alembic_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 459 \"./neutron/agents.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 460 \"./neutron/networks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 461 \"./neutron/ports.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 462 \"./neutron/subnets.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 463 \"./neutron/dnsnameservers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 464 \"./neutron/ipallocationpools.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 465 \"./neutron/subnetroutes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 466 \"./neutron/ipallocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 467 \"./neutron/ipavailabilityranges.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 468 \"./neutron/networkdhcpagentbindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 469 \"./neutron/externalnetworks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 470 \"./neutron/routers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 471 \"./neutron/floatingips.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 472 \"./neutron/routerroutes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 473 \"./neutron/routerl3agentbindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 474 \"./neutron/router_extra_attributes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 475 \"./neutron/ha_router_agent_port_bindings.ibd\"", "[00] 2025-07-12 
20:50:27 DDL tracking : create 476 \"./neutron/ha_router_networks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 477 \"./neutron/ha_router_vrid_allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 478 \"./neutron/routerports.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 479 \"./neutron/securitygroups.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 480 \"./neutron/securitygrouprules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 481 \"./neutron/securitygroupportbindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 482 \"./neutron/default_security_group.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 483 \"./neutron/networksecuritybindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 484 \"./neutron/portsecuritybindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 485 \"./neutron/providerresourceassociations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 486 \"./neutron/quotas.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 487 \"./neutron/allowedaddresspairs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 488 \"./neutron/portbindingports.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 489 \"./neutron/extradhcpopts.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 490 \"./neutron/subnetpools.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 491 \"./neutron/subnetpoolprefixes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 492 \"./neutron/network_states.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 493 \"./neutron/network_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 494 \"./neutron/ovs_tunnel_endpoints.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 495 \"./neutron/ovs_tunnel_allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 496 \"./neutron/ovs_vlan_allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 497 \"./neutron/ovs_network_bindings.ibd\"", "[00] 
2025-07-12 20:50:27 DDL tracking : create 498 \"./neutron/ml2_vlan_allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 499 \"./neutron/ml2_vxlan_endpoints.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 500 \"./neutron/ml2_gre_endpoints.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 501 \"./neutron/ml2_vxlan_allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 502 \"./neutron/ml2_gre_allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 503 \"./neutron/ml2_flat_allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 504 \"./neutron/ml2_network_segments.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 505 \"./neutron/ml2_port_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 506 \"./neutron/ml2_port_binding_levels.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 507 \"./neutron/cisco_ml2_nexusport_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 508 \"./neutron/arista_provisioned_nets.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 509 \"./neutron/arista_provisioned_vms.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 510 \"./neutron/arista_provisioned_tenants.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 511 \"./neutron/ml2_nexus_vxlan_allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 512 \"./neutron/ml2_nexus_vxlan_mcast_groups.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 513 \"./neutron/cisco_ml2_nexus_nve.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 514 \"./neutron/dvr_host_macs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 515 \"./neutron/ml2_dvr_port_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 516 \"./neutron/csnat_l3_agent_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 517 \"./neutron/firewall_policies.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 518 \"./neutron/firewalls.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : 
create 519 \"./neutron/firewall_rules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 520 \"./neutron/healthmonitors.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 521 \"./neutron/vips.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 522 \"./neutron/pools.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 523 \"./neutron/sessionpersistences.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 524 \"./neutron/poolloadbalanceragentbindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 525 \"./neutron/members.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 526 \"./neutron/poolmonitorassociations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 527 \"./neutron/poolstatisticss.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 528 \"./neutron/embrane_pool_port.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 529 \"./neutron/ipsecpolicies.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 530 \"./neutron/ikepolicies.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 531 \"./neutron/vpnservices.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 532 \"./neutron/ipsec_site_connections.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 533 \"./neutron/ipsecpeercidrs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 534 \"./neutron/meteringlabels.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 535 \"./neutron/meteringlabelrules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 536 \"./neutron/brocadenetworks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 537 \"./neutron/brocadeports.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 538 \"./neutron/ml2_brocadenetworks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 539 \"./neutron/ml2_brocadeports.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 540 \"./neutron/cisco_policy_profiles.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 541 \"./neutron/cisco_network_profiles.ibd\"", "[00] 
2025-07-12 20:50:27 DDL tracking : create 542 \"./neutron/cisco_n1kv_vxlan_allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 543 \"./neutron/cisco_n1kv_vlan_allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 544 \"./neutron/cisco_credentials.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 545 \"./neutron/cisco_qos_policies.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 546 \"./neutron/cisco_n1kv_profile_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 547 \"./neutron/cisco_n1kv_vmnetworks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 548 \"./neutron/cisco_n1kv_trunk_segments.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 549 \"./neutron/cisco_provider_networks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 550 \"./neutron/cisco_n1kv_multi_segments.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 551 \"./neutron/cisco_n1kv_network_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 552 \"./neutron/cisco_n1kv_port_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 553 \"./neutron/cisco_csr_identifier_map.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 554 \"./neutron/cisco_ml2_apic_host_links.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 555 \"./neutron/cisco_ml2_apic_names.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 556 \"./neutron/cisco_ml2_apic_contracts.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 557 \"./neutron/cisco_hosting_devices.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 558 \"./neutron/cisco_port_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 559 \"./neutron/cisco_router_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 560 \"./neutron/cisco_ml2_n1kv_policy_profiles.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 561 \"./neutron/cisco_ml2_n1kv_network_profiles.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 562 
\"./neutron/cisco_ml2_n1kv_port_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 563 \"./neutron/cisco_ml2_n1kv_network_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 564 \"./neutron/cisco_ml2_n1kv_vxlan_allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 565 \"./neutron/cisco_ml2_n1kv_vlan_allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 566 \"./neutron/cisco_ml2_n1kv_profile_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 567 \"./neutron/ml2_ucsm_port_profiles.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 568 \"./neutron/ofcportmappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 569 \"./neutron/ofcroutermappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 570 \"./neutron/routerproviders.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 571 \"./neutron/ofctenantmappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 572 \"./neutron/ofcfiltermappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 573 \"./neutron/ofcnetworkmappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 574 \"./neutron/packetfilters.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 575 \"./neutron/portinfos.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 576 \"./neutron/networkflavors.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 577 \"./neutron/routerflavors.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 578 \"./neutron/routerrules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 579 \"./neutron/nexthops.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 580 \"./neutron/consistencyhashes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 581 \"./neutron/tz_network_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 582 \"./neutron/multi_provider_networks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 583 \"./neutron/vcns_router_bindings.ibd\"", "[00] 2025-07-12 
20:50:27 DDL tracking : create 584 \"./neutron/networkgateways.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 585 \"./neutron/networkconnections.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 586 \"./neutron/qosqueues.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 587 \"./neutron/networkqueuemappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 588 \"./neutron/portqueuemappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 589 \"./neutron/maclearningstates.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 590 \"./neutron/neutron_nsx_port_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 591 \"./neutron/lsn.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 592 \"./neutron/lsn_port.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 593 \"./neutron/neutron_nsx_network_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 594 \"./neutron/neutron_nsx_router_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 595 \"./neutron/neutron_nsx_security_group_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 596 \"./neutron/networkgatewaydevicereferences.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 597 \"./neutron/networkgatewaydevices.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 598 \"./neutron/nuage_net_partitions.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 599 \"./neutron/nuage_subnet_l2dom_mapping.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 600 \"./neutron/nuage_net_partition_router_mapping.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 601 \"./neutron/nuage_provider_net_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 602 \"./neutron/nsxv_router_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 603 \"./neutron/nsxv_internal_networks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 604 \"./neutron/nsxv_internal_edges.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : 
create 605 \"./neutron/nsxv_firewall_rule_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 606 \"./neutron/nsxv_edge_dhcp_static_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 607 \"./neutron/nsxv_edge_vnic_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 608 \"./neutron/nsxv_spoofguard_policy_network_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 609 \"./neutron/nsxv_security_group_section_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 610 \"./neutron/nsxv_tz_network_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 611 \"./neutron/nsxv_port_vnic_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 612 \"./neutron/nsxv_port_index_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 613 \"./neutron/nsxv_rule_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 614 \"./neutron/nsxv_router_ext_attributes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 615 \"./neutron/nsxv_vdr_dhcp_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 616 \"./neutron/ipamsubnets.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 617 \"./neutron/ipamallocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 618 \"./neutron/ipamallocationpools.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 619 \"./neutron/ipamavailabilityranges.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 620 \"./neutron/address_scopes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 621 \"./neutron/flavors.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 622 \"./neutron/serviceprofiles.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 623 \"./neutron/flavorserviceprofilebindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 624 \"./neutron/networkrbacs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 625 \"./neutron/quotausages.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 626 
\"./neutron/qos_policies.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 627 \"./neutron/qos_network_policy_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 628 \"./neutron/qos_port_policy_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 629 \"./neutron/qos_bandwidth_limit_rules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 630 \"./neutron/reservations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 631 \"./neutron/resourcedeltas.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 632 \"./neutron/standardattributes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 633 \"./neutron/networkdnsdomains.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 634 \"./neutron/floatingipdnses.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 635 \"./neutron/portdnses.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 636 \"./neutron/auto_allocated_topologies.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 637 \"./neutron/bgp_speakers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 638 \"./neutron/bgp_peers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 639 \"./neutron/bgp_speaker_network_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 640 \"./neutron/bgp_speaker_peer_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 641 \"./neutron/bgp_speaker_dragent_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 642 \"./neutron/qospolicyrbacs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 643 \"./neutron/tags.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 644 \"./neutron/qos_dscp_marking_rules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 645 \"./neutron/trunks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 646 \"./neutron/subports.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 647 \"./neutron/provisioningblocks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 497 
\"./neutron/ovs_network_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 496 \"./neutron/ovs_vlan_allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 493 \"./neutron/network_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 495 \"./neutron/ovs_tunnel_allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 492 \"./neutron/network_states.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 494 \"./neutron/ovs_tunnel_endpoints.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 576 \"./neutron/networkflavors.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 577 \"./neutron/routerflavors.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 648 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 623 \"./neutron/flavorserviceprofilebindings.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 648 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/flavorserviceprofilebindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 623 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 649 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 648 \"./neutron/flavorserviceprofilebindings.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 649 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/flavorserviceprofilebindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 648 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 650 \"./neutron/ml2_geneve_allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 651 \"./neutron/ml2_geneve_endpoints.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 552 \"./neutron/cisco_n1kv_port_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 551 \"./neutron/cisco_n1kv_network_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL 
tracking : delete 550 \"./neutron/cisco_n1kv_multi_segments.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 549 \"./neutron/cisco_provider_networks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 548 \"./neutron/cisco_n1kv_trunk_segments.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 547 \"./neutron/cisco_n1kv_vmnetworks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 546 \"./neutron/cisco_n1kv_profile_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 545 \"./neutron/cisco_qos_policies.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 544 \"./neutron/cisco_credentials.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 543 \"./neutron/cisco_n1kv_vlan_allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 542 \"./neutron/cisco_n1kv_vxlan_allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 541 \"./neutron/cisco_network_profiles.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 540 \"./neutron/cisco_policy_profiles.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 528 \"./neutron/embrane_pool_port.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 652 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 461 \"./neutron/ports.ibd\",\"./neutron/#sql-ib665.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 652 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/ports.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 461 \"./neutron/#sql-ib665.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 653 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 652 \"./neutron/ports.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 653 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/ports.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 652 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 654 
\"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 460 \"./neutron/networks.ibd\",\"./neutron/#sql-ib667.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 654 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/networks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 460 \"./neutron/#sql-ib667.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 655 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 654 \"./neutron/networks.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 655 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/networks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 654 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 656 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 462 \"./neutron/subnets.ibd\",\"./neutron/#sql-ib669.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 656 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/subnets.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 462 \"./neutron/#sql-ib669.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 657 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 656 \"./neutron/subnets.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 657 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/subnets.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 656 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 658 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 490 \"./neutron/subnetpools.ibd\",\"./neutron/#sql-ib671.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 658 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/subnetpools.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 490 \"./neutron/#sql-ib671.ibd\"", "[00] 2025-07-12 
20:50:27 DDL tracking : create 659 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 658 \"./neutron/subnetpools.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 659 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/subnetpools.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 658 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 660 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 479 \"./neutron/securitygroups.ibd\",\"./neutron/#sql-ib673.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 660 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/securitygroups.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 479 \"./neutron/#sql-ib673.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 661 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 660 \"./neutron/securitygroups.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 661 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/securitygroups.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 660 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 662 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 471 \"./neutron/floatingips.ibd\",\"./neutron/#sql-ib675.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 662 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/floatingips.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 471 \"./neutron/#sql-ib675.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 663 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 662 \"./neutron/floatingips.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 663 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/floatingips.ibd\"", "[00] 2025-07-12 
20:50:27 DDL tracking : delete 662 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 664 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 470 \"./neutron/routers.ibd\",\"./neutron/#sql-ib677.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 664 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/routers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 470 \"./neutron/#sql-ib677.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 665 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 664 \"./neutron/routers.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 665 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/routers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 664 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 666 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 480 \"./neutron/securitygrouprules.ibd\",\"./neutron/#sql-ib679.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 666 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/securitygrouprules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 480 \"./neutron/#sql-ib679.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 667 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 666 \"./neutron/securitygrouprules.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 667 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/securitygrouprules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 666 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 516 \"./neutron/csnat_l3_agent_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 573 \"./neutron/ofcnetworkmappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 568 
\"./neutron/ofcportmappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 569 \"./neutron/ofcroutermappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 572 \"./neutron/ofcfiltermappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 571 \"./neutron/ofctenantmappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 575 \"./neutron/portinfos.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 570 \"./neutron/routerproviders.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 574 \"./neutron/packetfilters.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 504 \"./neutron/ml2_network_segments.ibd\",\"./neutron/networksegments.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 668 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 506 \"./neutron/ml2_port_binding_levels.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 668 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/ml2_port_binding_levels.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 506 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 669 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 657 \"./neutron/subnets.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 669 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/subnets.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 657 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 670 \"./neutron/segmenthostmappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 515 \"./neutron/ml2_dvr_port_bindings.ibd\",\"./neutron/ml2_distributed_port_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 671 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 665 \"./neutron/routers.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 
2025-07-12 20:50:27 DDL tracking : rename 671 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/routers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 665 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 672 \"./neutron/subnet_service_types.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 673 \"./neutron/qos_minimum_bandwidth_rules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 674 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 505 \"./neutron/ml2_port_bindings.ibd\",\"./neutron/#sql-ib687.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 674 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/ml2_port_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 505 \"./neutron/#sql-ib687.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 675 \"./neutron/portdataplanestatuses.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 676 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 629 \"./neutron/qos_bandwidth_limit_rules.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 676 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/qos_bandwidth_limit_rules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 629 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 677 \"./neutron/qos_policies_default.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 678 \"./neutron/logs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 679 \"./neutron/qos_fip_policy_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 680 \"./neutron/portforwardings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 681 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 680 \"./neutron/portforwardings.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 681 
\"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/portforwardings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 680 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 682 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 681 \"./neutron/portforwardings.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 682 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/portforwardings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 681 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 683 \"./neutron/portuplinkstatuspropagation.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 684 \"./neutron/qos_router_gw_policy_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 685 \"./neutron/network_segment_ranges.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 686 \"./neutron/securitygrouprbacs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 687 \"./neutron/conntrack_helpers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 688 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 669 \"./neutron/subnets.ibd\",\"./neutron/#sql-ib701.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 688 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/subnets.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 669 \"./neutron/#sql-ib701.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 689 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 655 \"./neutron/networks.ibd\",\"./neutron/#sql-ib702.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 689 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/networks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 655 \"./neutron/#sql-ib702.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 690 \"./neutron/ovn_revision_numbers.ibd\"", "[00] 2025-07-12 20:50:27 DDL 
tracking : create 691 \"./neutron/ovn_hash_ring.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 692 \"./neutron/network_subnet_lock.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 693 \"./neutron/subnet_dns_publish_fixed_ips.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 694 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 682 \"./neutron/portforwardings.ibd\",\"./neutron/#sql-ib707.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 694 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/portforwardings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 682 \"./neutron/#sql-ib707.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 695 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 694 \"./neutron/portforwardings.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 695 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/portforwardings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 694 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 696 \"./neutron/dvr_fip_gateway_port_network.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 697 \"./neutron/addressscoperbacs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 698 \"./neutron/subnetpoolrbacs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 699 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 498 \"./neutron/ml2_vlan_allocations.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 699 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/ml2_vlan_allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 498 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 700 \"./neutron/address_groups.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 701 \"./neutron/address_associations.ibd\"", "[00] 
2025-07-12 20:50:27 DDL tracking : create 702 \"./neutron/portnumaaffinitypolicies.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 703 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 667 \"./neutron/securitygrouprules.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 703 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/securitygrouprules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 667 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 704 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 700 \"./neutron/address_groups.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 704 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/address_groups.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 700 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 705 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 704 \"./neutron/address_groups.ibd\",\"./neutron/#sql-ib718.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 705 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/address_groups.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 704 \"./neutron/#sql-ib718.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 706 \"./neutron/portdeviceprofiles.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 707 \"./neutron/addressgrouprbacs.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 708 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 643 \"./neutron/tags.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 708 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/tags.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 643 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : 
create 709 \"./neutron/qos_packet_rate_limit_rules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 710 \"./neutron/qos_minimum_packet_rate_rules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 711 \"./neutron/local_ips.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 712 \"./neutron/local_ip_associations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 488 \"./neutron/portbindingports.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 713 \"./neutron/router_ndp_proxy_state.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 714 \"./neutron/ndp_proxies.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 715 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 695 \"./neutron/portforwardings.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 715 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/portforwardings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 695 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 716 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 715 \"./neutron/portforwardings.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 716 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/portforwardings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 715 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 717 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 716 \"./neutron/portforwardings.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 717 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/portforwardings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 716 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 718 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 
2025-07-12 20:50:27 DDL tracking : rename 481 \"./neutron/securitygroupportbindings.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 718 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/securitygroupportbindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 481 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 719 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 504 \"./neutron/networksegments.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 719 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/networksegments.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 504 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 720 \"./neutron/porthints.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 721 \"./neutron/securitygroupdefaultrules.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 556 \"./neutron/cisco_ml2_apic_contracts.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 555 \"./neutron/cisco_ml2_apic_names.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 554 \"./neutron/cisco_ml2_apic_host_links.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 560 \"./neutron/cisco_ml2_n1kv_policy_profiles.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 562 \"./neutron/cisco_ml2_n1kv_port_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 563 \"./neutron/cisco_ml2_n1kv_network_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 564 \"./neutron/cisco_ml2_n1kv_vxlan_allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 565 \"./neutron/cisco_ml2_n1kv_vlan_allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 566 \"./neutron/cisco_ml2_n1kv_profile_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 507 \"./neutron/cisco_ml2_nexusport_bindings.ibd\"", "[00] 2025-07-12 20:50:27 
DDL tracking : delete 513 \"./neutron/cisco_ml2_nexus_nve.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 512 \"./neutron/ml2_nexus_vxlan_mcast_groups.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 567 \"./neutron/ml2_ucsm_port_profiles.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 558 \"./neutron/cisco_port_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 559 \"./neutron/cisco_router_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 557 \"./neutron/cisco_hosting_devices.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 561 \"./neutron/cisco_ml2_n1kv_network_profiles.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 511 \"./neutron/ml2_nexus_vxlan_allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 581 \"./neutron/tz_network_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 593 \"./neutron/neutron_nsx_network_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 595 \"./neutron/neutron_nsx_security_group_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 590 \"./neutron/neutron_nsx_port_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 594 \"./neutron/neutron_nsx_router_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 582 \"./neutron/multi_provider_networks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 585 \"./neutron/networkconnections.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 596 \"./neutron/networkgatewaydevicereferences.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 597 \"./neutron/networkgatewaydevices.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 584 \"./neutron/networkgateways.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 589 \"./neutron/maclearningstates.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 588 \"./neutron/portqueuemappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 587 \"./neutron/networkqueuemappings.ibd\"", "[00] 
2025-07-12 20:50:27 DDL tracking : delete 586 \"./neutron/qosqueues.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 592 \"./neutron/lsn_port.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 591 \"./neutron/lsn.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 602 \"./neutron/nsxv_router_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 607 \"./neutron/nsxv_edge_vnic_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 606 \"./neutron/nsxv_edge_dhcp_static_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 603 \"./neutron/nsxv_internal_networks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 604 \"./neutron/nsxv_internal_edges.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 609 \"./neutron/nsxv_security_group_section_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 613 \"./neutron/nsxv_rule_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 611 \"./neutron/nsxv_port_vnic_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 614 \"./neutron/nsxv_router_ext_attributes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 610 \"./neutron/nsxv_tz_network_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 612 \"./neutron/nsxv_port_index_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 605 \"./neutron/nsxv_firewall_rule_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 608 \"./neutron/nsxv_spoofguard_policy_network_mappings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 615 \"./neutron/nsxv_vdr_dhcp_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 583 \"./neutron/vcns_router_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 537 \"./neutron/brocadeports.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 536 \"./neutron/brocadenetworks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 539 \"./neutron/ml2_brocadeports.ibd\"", "[00] 2025-07-12 20:50:27 DDL 
tracking : delete 538 \"./neutron/ml2_brocadenetworks.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 600 \"./neutron/nuage_net_partition_router_mapping.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 601 \"./neutron/nuage_provider_net_bindings.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 599 \"./neutron/nuage_subnet_l2dom_mapping.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 598 \"./neutron/nuage_net_partitions.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 553 \"./neutron/cisco_csr_identifier_map.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 722 \"./neutron/porthardwareoffloadtype.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 723 \"./neutron/porttrusted.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 724 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 626 \"./neutron/qos_policies.ibd\",\"./neutron/#sql-ib737.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 724 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/qos_policies.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 626 \"./neutron/#sql-ib737.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 725 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 724 \"./neutron/qos_policies.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 725 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/qos_policies.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 724 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 726 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 719 \"./neutron/networksegments.ibd\",\"./neutron/#sql-ib739.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 726 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/networksegments.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 719 \"./neutron/#sql-ib739.ibd\"", "[00] 
2025-07-12 20:50:27 DDL tracking : create 727 \"./neutron/#sql-alter-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 726 \"./neutron/networksegments.ibd\",\"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 727 \"./neutron/#sql-alter-dc-419.ibd\",\"./neutron/networksegments.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 726 \"./neutron/#sql-backup-dc-419.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 467 \"./neutron/ipavailabilityranges.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 619 \"./neutron/ipamavailabilityranges.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 728 \"./placement/alembic_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 729 \"./placement/allocations.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 730 \"./placement/consumers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 731 \"./placement/inventories.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 732 \"./placement/placement_aggregates.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 733 \"./placement/projects.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 734 \"./placement/resource_classes.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 735 \"./placement/resource_provider_aggregates.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 736 \"./placement/resource_providers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 737 \"./placement/traits.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 738 \"./placement/users.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 739 \"./placement/resource_provider_traits.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 740 \"./placement/consumer_types.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 741 \"./placement/#sql-alter-dc-47a.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 730 \"./placement/consumers.ibd\",\"./placement/#sql-backup-dc-47a.ibd\"", "[00] 
2025-07-12 20:50:27 DDL tracking : rename 741 \"./placement/#sql-alter-dc-47a.ibd\",\"./placement/consumers.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 730 \"./placement/#sql-backup-dc-47a.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 742 \"./magnum/alembic_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 743 \"./magnum/bay.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 744 \"./magnum/baymodel.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 745 \"./magnum/container.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 746 \"./magnum/node.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 747 \"./magnum/pod.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 748 \"./magnum/service.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 749 \"./magnum/replicationcontroller.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 750 \"./magnum/baylock.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 751 \"./magnum/#sql-alter-dc-511.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 748 \"./magnum/service.ibd\",\"./magnum/#sql-backup-dc-511.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 751 \"./magnum/#sql-alter-dc-511.ibd\",\"./magnum/service.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 748 \"./magnum/#sql-backup-dc-511.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 752 \"./magnum/x509keypair.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 753 \"./magnum/magnum_service.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 750 \"./magnum/baylock.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 746 \"./magnum/node.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 754 \"./magnum/quotas.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 747 \"./magnum/pod.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 751 \"./magnum/service.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 745 \"./magnum/container.ibd\"", "[00] 2025-07-12 
20:50:27 DDL tracking : delete 749 \"./magnum/replicationcontroller.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 744 \"./magnum/baymodel.ibd\",\"./magnum/cluster_template.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 743 \"./magnum/bay.ibd\",\"./magnum/cluster.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 755 \"./magnum/#sql-alter-dc-511.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 744 \"./magnum/cluster_template.ibd\",\"./magnum/#sql-backup-dc-511.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 755 \"./magnum/#sql-alter-dc-511.ibd\",\"./magnum/cluster_template.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 744 \"./magnum/#sql-backup-dc-511.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 756 \"./magnum/federation.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 757 \"./magnum/nodegroup.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 758 \"./grafana/migration_log.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 759 \"./grafana/user.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 759 \"./grafana/user.ibd\",\"./grafana/user_v1.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 760 \"./grafana/user.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 759 \"./grafana/user_v1.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 761 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 760 \"./grafana/user.ibd\",\"./grafana/#sql-ib774.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 761 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/user.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 760 \"./grafana/#sql-ib774.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 762 \"./grafana/temp_user.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 762 \"./grafana/temp_user.ibd\",\"./grafana/temp_user_tmp_qwerty.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 763 \"./grafana/temp_user.ibd\"", "[00] 2025-07-12 
20:50:27 DDL tracking : delete 762 \"./grafana/temp_user_tmp_qwerty.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 764 \"./grafana/star.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 765 \"./grafana/org.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 766 \"./grafana/org_user.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 767 \"./grafana/dashboard.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 768 \"./grafana/dashboard_tag.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 767 \"./grafana/dashboard.ibd\",\"./grafana/dashboard_v1.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 769 \"./grafana/dashboard.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 767 \"./grafana/dashboard_v1.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 770 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 769 \"./grafana/dashboard.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 770 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/dashboard.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 769 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 771 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 770 \"./grafana/dashboard.ibd\",\"./grafana/#sql-ib784.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 771 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/dashboard.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 770 \"./grafana/#sql-ib784.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 772 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 771 \"./grafana/dashboard.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 772 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/dashboard.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 771 \"./grafana/#sql-backup-dc-52c.ibd\"", 
"[00] 2025-07-12 20:50:27 DDL tracking : create 773 \"./grafana/dashboard_provisioning.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 773 \"./grafana/dashboard_provisioning.ibd\",\"./grafana/dashboard_provisioning_tmp_qwerty.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 774 \"./grafana/dashboard_provisioning.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 773 \"./grafana/dashboard_provisioning_tmp_qwerty.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 775 \"./grafana/data_source.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 775 \"./grafana/data_source.ibd\",\"./grafana/data_source_v1.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 776 \"./grafana/data_source.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 775 \"./grafana/data_source_v1.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 777 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 776 \"./grafana/data_source.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 777 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/data_source.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 776 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 778 \"./grafana/api_key.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 778 \"./grafana/api_key.ibd\",\"./grafana/api_key_v1.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 779 \"./grafana/api_key.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 778 \"./grafana/api_key_v1.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 780 \"./grafana/dashboard_snapshot.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 780 \"./grafana/dashboard_snapshot.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 781 \"./grafana/dashboard_snapshot.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 782 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 
781 \"./grafana/dashboard_snapshot.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 782 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/dashboard_snapshot.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 781 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 783 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 782 \"./grafana/dashboard_snapshot.ibd\",\"./grafana/#sql-ib796.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 783 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/dashboard_snapshot.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 782 \"./grafana/#sql-ib796.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 784 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 783 \"./grafana/dashboard_snapshot.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 784 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/dashboard_snapshot.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 783 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 785 \"./grafana/quota.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 786 \"./grafana/plugin_setting.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 787 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 786 \"./grafana/plugin_setting.ibd\",\"./grafana/#sql-ib800.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 787 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/plugin_setting.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 786 \"./grafana/#sql-ib800.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 788 \"./grafana/session.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 789 \"./grafana/playlist.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 790 \"./grafana/playlist_item.ibd\"", "[00] 2025-07-12 
20:50:27 DDL tracking : create 791 \"./grafana/preferences.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 792 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 791 \"./grafana/preferences.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 792 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/preferences.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 791 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 793 \"./grafana/alert.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 794 \"./grafana/alert_rule_tag.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 795 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 794 \"./grafana/alert_rule_tag.ibd\",\"./grafana/#sql-ib808.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 795 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/alert_rule_tag.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 794 \"./grafana/#sql-ib808.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 796 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 795 \"./grafana/alert_rule_tag.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 796 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/alert_rule_tag.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 795 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 796 \"./grafana/alert_rule_tag.ibd\",\"./grafana/alert_rule_tag_v1.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 797 \"./grafana/alert_rule_tag.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 796 \"./grafana/alert_rule_tag_v1.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 798 \"./grafana/alert_notification.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 799 \"./grafana/alert_notification_journal.ibd\"", "[00] 
2025-07-12 20:50:27 DDL tracking : delete 799 \"./grafana/alert_notification_journal.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 800 \"./grafana/alert_notification_state.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 801 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 793 \"./grafana/alert.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 801 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/alert.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 793 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 802 \"./grafana/annotation.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 803 \"./grafana/annotation_tag.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 804 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 803 \"./grafana/annotation_tag.ibd\",\"./grafana/#sql-ib817.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 804 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/annotation_tag.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 803 \"./grafana/#sql-ib817.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 805 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 804 \"./grafana/annotation_tag.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 805 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/annotation_tag.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 804 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 805 \"./grafana/annotation_tag.ibd\",\"./grafana/annotation_tag_v2.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 806 \"./grafana/annotation_tag.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 805 \"./grafana/annotation_tag_v2.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 807 \"./grafana/test_data.ibd\"", "[00] 
2025-07-12 20:50:27 DDL tracking : create 808 \"./grafana/dashboard_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 809 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 808 \"./grafana/dashboard_version.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 809 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/dashboard_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 808 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 810 \"./grafana/team.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 811 \"./grafana/team_member.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 812 \"./grafana/dashboard_acl.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 813 \"./grafana/tag.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 814 \"./grafana/login_attempt.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 814 \"./grafana/login_attempt.ibd\",\"./grafana/login_attempt_tmp_qwerty.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 815 \"./grafana/login_attempt.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 814 \"./grafana/login_attempt_tmp_qwerty.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 816 \"./grafana/user_auth.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 817 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 816 \"./grafana/user_auth.ibd\",\"./grafana/#sql-ib830.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 817 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/user_auth.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 816 \"./grafana/#sql-ib830.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 818 \"./grafana/server_lock.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 819 \"./grafana/user_auth_token.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 820 \"./grafana/cache_data.ibd\"", "[00] 2025-07-12 
20:50:27 DDL tracking : create 821 \"./grafana/short_url.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 822 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 821 \"./grafana/short_url.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 822 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/short_url.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 821 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 823 \"./grafana/alert_definition.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 824 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 823 \"./grafana/alert_definition.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 824 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/alert_definition.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 823 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 824 \"./grafana/alert_definition.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 825 \"./grafana/alert_definition_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 826 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 825 \"./grafana/alert_definition_version.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 826 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/alert_definition_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 825 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 826 \"./grafana/alert_definition_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 827 \"./grafana/alert_instance.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 828 \"./grafana/alert_rule.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 829 \"./grafana/#sql-alter-dc-52c.ibd\"", 
"[00] 2025-07-12 20:50:27 DDL tracking : rename 828 \"./grafana/alert_rule.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 829 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/alert_rule.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 828 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 830 \"./grafana/alert_rule_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 831 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 830 \"./grafana/alert_rule_version.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 831 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/alert_rule_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 830 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 832 \"./grafana/alert_configuration.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 833 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 832 \"./grafana/alert_configuration.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 833 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/alert_configuration.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 832 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 834 \"./grafana/ngalert_configuration.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 835 \"./grafana/provenance_type.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 836 \"./grafana/alert_image.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 837 \"./grafana/alert_configuration_history.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 838 \"./grafana/library_element.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 839 \"./grafana/library_element_connection.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 840 
\"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 838 \"./grafana/library_element.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 840 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/library_element.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 838 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 841 \"./grafana/data_keys.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 842 \"./grafana/secrets.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 843 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 841 \"./grafana/data_keys.ibd\",\"./grafana/#sql-ib856.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 843 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/data_keys.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 841 \"./grafana/#sql-ib856.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 844 \"./grafana/kv_store.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 845 \"./grafana/permission.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 846 \"./grafana/role.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 847 \"./grafana/team_role.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 848 \"./grafana/user_role.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 849 \"./grafana/builtin_role.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 850 \"./grafana/seed_assignment.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 851 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 850 \"./grafana/seed_assignment.ibd\",\"./grafana/#sql-ib864.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 851 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/seed_assignment.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 850 \"./grafana/#sql-ib864.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 852 
\"./grafana/query_history.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 853 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 852 \"./grafana/query_history.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 853 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/query_history.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 852 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 854 \"./grafana/query_history_details.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 855 \"./grafana/query_history_star.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 856 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 855 \"./grafana/query_history_star.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 856 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/query_history_star.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 855 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 857 \"./grafana/correlation.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 857 \"./grafana/correlation.ibd\",\"./grafana/correlation_tmp_qwerty.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 858 \"./grafana/correlation.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 857 \"./grafana/correlation_tmp_qwerty.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 859 \"./grafana/entity_event.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 860 \"./grafana/dashboard_public_config.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 860 \"./grafana/dashboard_public_config.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 861 \"./grafana/dashboard_public_config.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 861 \"./grafana/dashboard_public_config.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 862 
\"./grafana/dashboard_public_config.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 862 \"./grafana/dashboard_public_config.ibd\",\"./grafana/dashboard_public.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 863 \"./grafana/file.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 864 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 863 \"./grafana/file.ibd\",\"./grafana/#sql-ib877.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 864 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/file.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 863 \"./grafana/#sql-ib877.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 865 \"./grafana/file_meta.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 866 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 865 \"./grafana/file_meta.ibd\",\"./grafana/#sql-ib879.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 866 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/file_meta.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 865 \"./grafana/#sql-ib879.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 867 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 864 \"./grafana/file.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 867 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/file.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 864 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 868 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 851 \"./grafana/seed_assignment.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 868 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/seed_assignment.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 851 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL 
tracking : create 869 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 868 \"./grafana/seed_assignment.ibd\",\"./grafana/#sql-ib882.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 869 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/seed_assignment.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 868 \"./grafana/#sql-ib882.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 870 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 869 \"./grafana/seed_assignment.ibd\",\"./grafana/#sql-ib883.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 870 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/seed_assignment.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 869 \"./grafana/#sql-ib883.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 871 \"./grafana/folder.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 872 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 871 \"./grafana/folder.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 872 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/folder.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 871 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 873 \"./grafana/anon_device.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 874 \"./grafana/signing_key.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 875 \"./grafana/sso_setting.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 876 \"./grafana/cloud_migration.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 877 \"./grafana/cloud_migration_run.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 876 \"./grafana/cloud_migration.ibd\",\"./grafana/cloud_migration_session_tmp_qwerty.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 878 \"./grafana/cloud_migration_session.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : 
delete 876 \"./grafana/cloud_migration_session_tmp_qwerty.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 877 \"./grafana/cloud_migration_run.ibd\",\"./grafana/cloud_migration_snapshot_tmp_qwerty.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 879 \"./grafana/cloud_migration_snapshot.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 877 \"./grafana/cloud_migration_snapshot_tmp_qwerty.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 880 \"./grafana/cloud_migration_resource.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 881 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 880 \"./grafana/cloud_migration_resource.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 881 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/cloud_migration_resource.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 880 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 882 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 844 \"./grafana/kv_store.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 882 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/kv_store.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 844 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 883 \"./grafana/user_external_session.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 884 \"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 883 \"./grafana/user_external_session.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 884 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/user_external_session.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 883 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 885 
\"./grafana/#sql-alter-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 884 \"./grafana/user_external_session.ibd\",\"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 885 \"./grafana/#sql-alter-dc-52c.ibd\",\"./grafana/user_external_session.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 884 \"./grafana/#sql-backup-dc-52c.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 886 \"./grafana/alert_rule_state.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 887 \"./grafana/resource_migration_log.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 888 \"./grafana/resource.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 889 \"./grafana/resource_history.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 890 \"./grafana/resource_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 891 \"./grafana/#sql-alter-dc-563.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 890 \"./grafana/resource_version.ibd\",\"./grafana/#sql-ib904.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : rename 891 \"./grafana/#sql-alter-dc-563.ibd\",\"./grafana/resource_version.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : delete 890 \"./grafana/#sql-ib904.ibd\"", "[00] 2025-07-12 20:50:27 DDL tracking : create 892 \"./grafana/resource_blob.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 893 \"./octavia/alembic_version.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 894 \"./octavia/health_monitor_type.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 895 \"./octavia/protocol.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 896 \"./octavia/algorithm.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 897 \"./octavia/session_persistence_type.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 898 \"./octavia/provisioning_status.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 899 \"./octavia/operating_status.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : 
create 900 \"./octavia/pool.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 901 \"./octavia/health_monitor.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 902 \"./octavia/session_persistence.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 903 \"./octavia/member.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 904 \"./octavia/load_balancer.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 905 \"./octavia/vip.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 906 \"./octavia/listener.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 907 \"./octavia/sni.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 908 \"./octavia/listener_statistics.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 909 \"./octavia/amphora.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 910 \"./octavia/load_balancer_amphora.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 911 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 909 \"./octavia/amphora.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 911 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/amphora.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 909 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 910 \"./octavia/load_balancer_amphora.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 912 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 911 \"./octavia/amphora.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 912 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/amphora.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 911 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 913 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 903 \"./octavia/member.ibd\",\"./octavia/#sql-ib926.ibd\"", 
"[00] 2025-07-12 20:50:28 DDL tracking : rename 913 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/member.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 903 \"./octavia/#sql-ib926.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 914 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 912 \"./octavia/amphora.ibd\",\"./octavia/#sql-ib927.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 914 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/amphora.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 912 \"./octavia/#sql-ib927.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 915 \"./octavia/amphora_health.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 916 \"./octavia/lb_topology.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 917 \"./octavia/amphora_roles.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 918 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 904 \"./octavia/load_balancer.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 918 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/load_balancer.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 904 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 919 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 914 \"./octavia/amphora.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 919 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/amphora.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 914 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 920 \"./octavia/vrrp_auth_method.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 921 \"./octavia/vrrp_group.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 922 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : 
rename 918 \"./octavia/load_balancer.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 922 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/load_balancer.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 918 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 923 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 906 \"./octavia/listener.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 923 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/listener.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 906 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 924 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 900 \"./octavia/pool.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 924 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/pool.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 900 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 925 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 913 \"./octavia/member.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 925 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/member.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 913 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 926 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 924 \"./octavia/pool.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 926 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/pool.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 924 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 927 
\"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 923 \"./octavia/listener.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 927 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/listener.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 923 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 928 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 926 \"./octavia/pool.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 928 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/pool.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 926 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 929 \"./octavia/l7rule_type.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 930 \"./octavia/l7rule_compare_type.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 931 \"./octavia/l7policy_action.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 932 \"./octavia/l7policy.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 933 \"./octavia/l7rule.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 934 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 908 \"./octavia/listener_statistics.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 934 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/listener_statistics.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 908 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 935 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 934 \"./octavia/listener_statistics.ibd\",\"./octavia/#sql-ib948.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 935 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/listener_statistics.ibd\"", "[00] 2025-07-12 20:50:28 
DDL tracking : delete 934 \"./octavia/#sql-ib948.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 936 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 935 \"./octavia/listener_statistics.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 936 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/listener_statistics.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 935 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 937 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 936 \"./octavia/listener_statistics.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 937 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/listener_statistics.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 936 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 938 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 901 \"./octavia/health_monitor.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 938 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/health_monitor.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 901 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 939 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 932 \"./octavia/l7policy.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 939 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/l7policy.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 932 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 940 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 933 \"./octavia/l7rule.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 
20:50:28 DDL tracking : rename 940 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/l7rule.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 933 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 941 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 925 \"./octavia/member.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 941 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/member.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 925 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 942 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 928 \"./octavia/pool.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 942 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/pool.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 928 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 943 \"./octavia/quotas.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 944 \"./octavia/amphora_build_slots.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 945 \"./octavia/amphora_build_request.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 946 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 938 \"./octavia/health_monitor.ibd\",\"./octavia/#sql-ib959.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 946 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/health_monitor.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 938 \"./octavia/#sql-ib959.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 947 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 946 \"./octavia/health_monitor.ibd\",\"./octavia/#sql-ib960.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 947 
\"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/health_monitor.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 946 \"./octavia/#sql-ib960.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 948 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 947 \"./octavia/health_monitor.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 948 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/health_monitor.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 947 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 949 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 939 \"./octavia/l7policy.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 949 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/l7policy.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 939 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 950 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 949 \"./octavia/l7policy.ibd\",\"./octavia/#sql-ib963.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 950 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/l7policy.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 949 \"./octavia/#sql-ib963.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 951 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 948 \"./octavia/health_monitor.ibd\",\"./octavia/#sql-ib964.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 951 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/health_monitor.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 948 \"./octavia/#sql-ib964.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 952 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 941 
\"./octavia/member.ibd\",\"./octavia/#sql-ib965.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 952 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/member.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 941 \"./octavia/#sql-ib965.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 953 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 942 \"./octavia/pool.ibd\",\"./octavia/#sql-ib966.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 953 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/pool.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 942 \"./octavia/#sql-ib966.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 954 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 940 \"./octavia/l7rule.ibd\",\"./octavia/#sql-ib967.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 954 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/l7rule.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 940 \"./octavia/#sql-ib967.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 955 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 951 \"./octavia/health_monitor.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 955 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/health_monitor.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 951 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 956 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 950 \"./octavia/l7policy.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 956 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/l7policy.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 950 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 957 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 
20:50:28 DDL tracking : rename 954 \"./octavia/l7rule.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 957 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/l7rule.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 954 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 958 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 952 \"./octavia/member.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 958 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/member.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 952 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 959 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 953 \"./octavia/pool.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 959 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/pool.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 953 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 960 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 957 \"./octavia/l7rule.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 960 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/l7rule.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 957 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 961 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 955 \"./octavia/health_monitor.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 961 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/health_monitor.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 955 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL 
tracking : create 962 \"./octavia/tags.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 963 \"./octavia/flavor_profile.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 964 \"./octavia/flavor.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 965 \"./octavia/client_authentication_mode.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 966 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 927 \"./octavia/listener.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 966 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/listener.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 927 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 967 \"./octavia/spares_pool.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 968 \"./octavia/listener_cidr.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 969 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 922 \"./octavia/load_balancer.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 969 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/load_balancer.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 922 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 970 \"./octavia/availability_zone_profile.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 971 \"./octavia/availability_zone.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 972 \"./octavia/#sql-alter-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 969 \"./octavia/load_balancer.ibd\",\"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : rename 972 \"./octavia/#sql-alter-dc-674.ibd\",\"./octavia/load_balancer.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : delete 969 \"./octavia/#sql-backup-dc-674.ibd\"", "[00] 2025-07-12 20:50:28 DDL tracking : create 973 
\"./octavia/additional_vip.ibd\"", "[00] 2025-07-12 20:50:29 Connecting to MariaDB server host: 192.168.16.11, user: backup_shard_0, password: set, port: 3306, socket: not set", "[00] 2025-07-12 20:50:29 Connecting to MariaDB server host: 192.168.16.11, user: backup_shard_0, password: set, port: 3306, socket: not set", "[00] 2025-07-12 20:50:29 BACKUP STAGE START", "[00] 2025-07-12 20:50:29 Acquiring BACKUP LOCKS...", "[00] 2025-07-12 20:50:29 Streaming /var/lib/mysql//aria_log_control to ", "[00] 2025-07-12 20:50:29 ...done", "[00] 2025-07-12 20:50:29 Loading aria_log_control.", "[00] 2025-07-12 20:50:29 aria_log_control: last_log_number: 1", "[00] 2025-07-12 20:50:29 Start scanning aria tables.", "[00] 2025-07-12 20:50:29 Start scanning aria log files.", "[00] 2025-07-12 20:50:29 Found 1 aria log files, minimum log number 1, maximum log number 1", "[00] 2025-07-12 20:50:29 Stop scanning aria tables.", "[00] 2025-07-12 20:50:29 Streaming ./mysql/wsrep_cluster_members.ibd", "[00] 2025-07-12 20:50:29 ...done", "[00] 2025-07-12 20:50:29 Streaming ./mysql/innodb_index_stats.ibd", "[00] 2025-07-12 20:50:29 ...done", "[00] 2025-07-12 20:50:29 Streaming ./mysql/wsrep_allowlist.ibd", "[00] 2025-07-12 20:50:29 ...done", "[00] 2025-07-12 20:50:29 Streaming ./mysql/gtid_slave_pos.ibd", "[00] 2025-07-12 20:50:29 ...done", "[00] 2025-07-12 20:50:29 Streaming ./mysql/wsrep_streaming_log.ibd", "[00] 2025-07-12 20:50:29 ...done", "[00] 2025-07-12 20:50:29 Streaming ./mysql/transaction_registry.ibd", "[00] 2025-07-12 20:50:29 ...done", "[00] 2025-07-12 20:50:29 Streaming ./mysql/innodb_table_stats.ibd", "[00] 2025-07-12 20:50:29 ...done", "[00] 2025-07-12 20:50:29 Streaming ./mysql/wsrep_cluster.ibd", "[00] 2025-07-12 20:50:29 ...done", "[00] 2025-07-12 20:50:29 Streaming ibdata1", "[00] 2025-07-12 20:50:30 ...done", "[00] 2025-07-12 20:50:30 aria table file ./sys/sys_config.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./sys/sys_config.MAD is copied 
successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/plugin.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/plugin.MAD is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/servers.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/servers.MAD is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/global_priv.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/global_priv.MAD is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/time_zone_leap_second.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/time_zone_leap_second.MAD is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/time_zone_transition_type.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/time_zone_transition_type.MAD is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/proc.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/proc.MAD is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/event.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/event.MAD is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/func.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/func.MAD is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/procs_priv.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/procs_priv.MAD is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/tables_priv.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/tables_priv.MAD is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/time_zone.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/time_zone.MAD 
is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/columns_priv.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/columns_priv.MAD is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/time_zone_name.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/time_zone_name.MAD is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/roles_mapping.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/roles_mapping.MAD is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/time_zone_transition.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/time_zone_transition.MAD is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/db.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/db.MAD is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/proxies_priv.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/proxies_priv.MAD is copied successfully.", "[00] 2025-07-12 20:50:30 Start copying aria log file tail: /var/lib/mysql//aria_log.00000001", "[00] 2025-07-12 20:50:30 Stop copying aria log file tail: /var/lib/mysql//aria_log.00000001, copied 425984 bytes", "[00] 2025-07-12 20:50:30 BACKUP STAGE FLUSH", "[00] 2025-07-12 20:50:30 Start scanning common engine tables, need backup locks: 0, collect log and stat tables: 1", "[00] 2025-07-12 20:50:30 Log table found: mysql.slow_log", "[00] 2025-07-12 20:50:30 Collect log table file: ./mysql/slow_log.CSV", "[00] 2025-07-12 20:50:30 Log table found: mysql.general_log", "[00] 2025-07-12 20:50:30 Collect log table file: ./mysql/general_log.CSM", "[00] 2025-07-12 20:50:30 Collect log table file: ./mysql/slow_log.CSM", "[00] 2025-07-12 20:50:30 Collect log table file: ./mysql/general_log.CSV", "[00] 2025-07-12 20:50:30 Stop scanning common engine 
tables", "[00] 2025-07-12 20:50:30 Start copying aria log file tail: /var/lib/mysql//aria_log.00000001", "[00] 2025-07-12 20:50:30 Stop copying aria log file tail: /var/lib/mysql//aria_log.00000001, copied 0 bytes", "[00] 2025-07-12 20:50:30 aria table file ./mysql/help_topic.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/help_topic.MAD is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/help_keyword.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/help_keyword.MAD is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/help_category.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/help_category.MAD is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/help_relation.MAI is copied successfully.", "[00] 2025-07-12 20:50:30 aria table file ./mysql/help_relation.MAD is copied successfully.", "[00] 2025-07-12 20:50:30 Start scanning common engine tables, need backup locks: 1, collect log and stat tables: 0", "[00] 2025-07-12 20:50:30 Stop scanning common engine tables", "[00] 2025-07-12 20:50:30 Starting to backup non-InnoDB tables and files", "[01] 2025-07-12 20:50:30 Streaming ./barbican/project_certificate_authorities.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/transport_keys.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/secret_user_metadata.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/secret_stores.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/alembic_version.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/order_barbican_metadata.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/db.opt to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 
2025-07-12 20:50:30 Streaming ./barbican/certificate_authority_metadata.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/secret_acl_users.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/container_acl_users.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/project_quotas.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/projects.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/order_plugin_metadata.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/secret_consumer_metadata.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/container_secret.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/kek_data.frm to ", "[00] 2025-07-12 20:50:30 Copied file ./mysql/general_log.CSV for log table `mysql`.`general_log`, 0 bytes", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/container_acls.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/certificate_authorities.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/secrets.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/secret_store_metadata.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/order_retry_tasks.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/project_secret_store.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/encrypted_data.frm to ", "[00] 2025-07-12 20:50:30 Copied file ./mysql/slow_log.CSV for log table `mysql`.`slow_log`, 0 bytes", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 
20:50:30 Streaming ./barbican/orders.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/secret_acls.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/preferred_certificate_authorities.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/containers.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./barbican/container_consumer_metadata.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/sensitive_config.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/assignment.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/system_assignment.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/local_user.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/user_group_membership.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/alembic_version.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/revocation_event.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/application_credential_role.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/project_tag.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/user.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/request_token.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/endpoint_group.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/group.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming 
./keystone/user_option.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/db.opt to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/role.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/implied_role.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/application_credential_access_rule.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/nonlocal_user.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/service_provider.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/trust_role.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/identity_provider.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/federation_protocol.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/region.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/policy_association.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/whitelisted_config.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/registered_limit.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/id_mapping.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/access_token.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/endpoint.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/application_credential.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/token.frm to ", "[01] 2025-07-12 20:50:30 
...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/project_endpoint.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/federated_user.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/project.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/limit.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/policy.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/project_option.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/project_endpoint_group.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/idp_remote_ids.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/password.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/trust.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/config_register.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/access_rule.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/role_option.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/mapping.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/service.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/credential.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/consumer.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./keystone/expiring_user_group_membership.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./magnum/federation.frm to ", "[01] 2025-07-12 
20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./magnum/nodegroup.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./magnum/alembic_version.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./magnum/cluster.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./magnum/x509keypair.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./magnum/db.opt to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./magnum/cluster_template.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./magnum/quotas.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./magnum/magnum_service.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia_persistence/db.opt to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/lb_topology.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/session_persistence_type.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/health_monitor.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/health_monitor_type.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/alembic_version.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/amphora_health.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/provisioning_status.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/l7rule_compare_type.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/l7policy_action.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/db.opt to ", "[01] 
2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/flavor_profile.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/l7rule.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/amphora_build_slots.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/tags.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/availability_zone_profile.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/listener_cidr.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/availability_zone.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/session_persistence.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/vrrp_group.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/pool.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/spares_pool.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/load_balancer.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/vrrp_auth_method.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/amphora_roles.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/amphora_build_request.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/algorithm.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/l7rule_type.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/operating_status.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming 
./octavia/amphora.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/sni.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/quotas.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/listener_statistics.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/protocol.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/l7policy.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/additional_vip.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/listener.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/flavor.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/client_authentication_mode.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/vip.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./octavia/member.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/host_mappings.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/resource_provider_traits.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/alembic_version.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/users.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/aggregate_hosts.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/traits.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/flavor_projects.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming 
./nova_api/aggregate_metadata.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/db.opt to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/request_specs.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/project_user_quotas.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/projects.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/flavor_extra_specs.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/aggregates.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/key_pairs.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/instance_groups.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/placement_aggregates.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/build_requests.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/resource_provider_aggregates.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/instance_group_policy.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/inventories.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/allocations.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/resource_providers.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/consumers.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/instance_group_member.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/quota_usages.frm to ", "[01] 
2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/reservations.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/quotas.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/cell_mappings.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/resource_classes.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/quota_classes.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/instance_mappings.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova_api/flavors.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_block_device_mapping.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_security_groups.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_snapshots.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_cells.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/instance_faults.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/migrations.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/instance_system_metadata.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/share_mapping.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_migrations.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/instance_type_extra_specs.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_volume_usage_cache.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 
Streaming ./nova/shadow_instance_extra.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_volume_id_mappings.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_fixed_ips.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_faults.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/security_group_instance_association.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/alembic_version.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_info_caches.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_actions.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/block_device_mapping.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/compute_nodes.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instances.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_compute_nodes.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/aggregate_hosts.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/agent_builds.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_pci_devices.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/aggregate_metadata.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_group_member.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/cells.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/services.frm 
to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/db.opt to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/floating_ips.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_id_mappings.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_aggregates.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/volume_id_mappings.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_snapshot_id_mappings.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_quota_usages.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_actions_events.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/project_user_quotas.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/virtual_interfaces.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_floating_ips.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_aggregate_hosts.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/security_groups.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_type_projects.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_certificates.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_agent_builds.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/tags.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/instance_extra.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 
2025-07-12 20:50:30 Streaming ./nova/shadow_reservations.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/instance_actions_events.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_security_group_instance_association.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/aggregates.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/key_pairs.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/console_pools.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_quota_classes.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_dns_domains.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_system_metadata.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_types.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/instance_groups.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/snapshot_id_mappings.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/instance_id_mappings.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/fixed_ips.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_console_pools.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/instance_info_caches.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_instance_type_extra_specs.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./nova/shadow_consoles.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 
[01] 2025-07-12 20:50:30 Streaming ./nova/ files (continued; each reported "...done"): snapshots.frm, instance_actions.frm, shadow_security_group_rules.frm, shadow_alembic_version.frm, shadow_key_pairs.frm, pci_devices.frm, shadow_bw_usage_cache.frm, shadow_aggregate_metadata.frm, resource_provider_aggregates.frm, instance_group_policy.frm, shadow_provider_fw_rules.frm, shadow_instance_groups.frm, task_log.frm, certificates.frm, shadow_s3_images.frm, security_group_default_rules.frm, bw_usage_cache.frm, s3_images.frm, inventories.frm, dns_domains.frm, allocations.frm, provider_fw_rules.frm, resource_providers.frm, instance_types.frm, console_auth_tokens.frm, shadow_networks.frm, instance_group_member.frm, shadow_virtual_interfaces.frm, shadow_instance_metadata.frm, shadow_quotas.frm, quota_usages.frm, shadow_security_group_default_rules.frm, reservations.frm, instance_metadata.frm, volume_usage_cache.frm, instances.frm, networks.frm, instance_type_projects.frm, consoles.frm, quotas.frm, shadow_task_log.frm, shadow_services.frm, shadow_instance_group_policy.frm, quota_classes.frm, security_group_rules.frm, shadow_project_user_quotas.frm
[01] 2025-07-12 20:50:30 Streaming ./glance/ files (each "...done"): image_members.frm, metadef_objects.frm, alembic_version.frm, tasks.frm, metadef_resource_types.frm, db.opt, metadef_tags.frm, node_reference.frm, metadef_namespaces.frm, task_info.frm, metadef_namespace_resource_types.frm, image_properties.frm, cached_images.frm, images.frm, metadef_properties.frm, image_locations.frm, image_tags.frm
[01] 2025-07-12 20:50:30 Streaming ./performance_schema/ files (each "...done"): db.opt
[01] 2025-07-12 20:50:30 Streaming ./neutron/ files (each "...done"): dnsnameservers.frm, ports.frm, consistencyhashes.frm, qos_fip_policy_bindings.frm, vips.frm, default_security_group.frm, ikepolicies.frm, ipamsubnets.frm, dvr_fip_gateway_port_network.frm, portdeviceprofiles.frm, extradhcpopts.frm, address_scopes.frm, members.frm, routerroutes.frm, alembic_version.frm, routers.frm, portnumaaffinitypolicies.frm, ha_router_networks.frm, qos_bandwidth_limit_rules.frm, bgp_speakers.frm, ipamallocationpools.frm, ml2_gre_allocations.frm, ovn_hash_ring.frm, ovn_revision_numbers.frm, provisioningblocks.frm, routerports.frm, ml2_vlan_allocations.frm, ha_router_agent_port_bindings.frm, quotausages.frm, router_extra_attributes.frm, bgp_speaker_peer_bindings.frm, resourcedeltas.frm, nexthops.frm, db.opt, arista_provisioned_tenants.frm, sessionpersistences.frm, local_ips.frm, dvr_host_macs.frm, porthints.frm, ipallocations.frm, firewalls.frm, qos_network_policy_bindings.frm, subnetroutes.frm, subnet_service_types.frm, meteringlabels.frm, qos_minimum_bandwidth_rules.frm, subnets.frm, tags.frm, subnetpools.frm, portdnses.frm, qos_router_gw_policy_bindings.frm, ml2_geneve_endpoints.frm, subports.frm, router_ndp_proxy_state.frm, flavorserviceprofilebindings.frm, ndp_proxies.frm, vpnservices.frm, portsecuritybindings.frm, securitygrouprules.frm, subnet_dns_publish_fixed_ips.frm, ipamallocations.frm, ml2_geneve_allocations.frm, address_groups.frm, networksecuritybindings.frm, addressscoperbacs.frm, network_segment_ranges.frm, qos_policies_default.frm, bgp_peers.frm, pools.frm, networksegments.frm, ipallocationpools.frm, networkdnsdomains.frm, floatingips.frm, trunks.frm, arista_provisioned_vms.frm, network_subnet_lock.frm, ml2_port_binding_levels.frm, networkdhcpagentbindings.frm, auto_allocated_topologies.frm, addressgrouprbacs.frm, allowedaddresspairs.frm, securitygroupdefaultrules.frm, ml2_vxlan_endpoints.frm, porthardwareoffloadtype.frm, local_ip_associations.frm, routerl3agentbindings.frm, healthmonitors.frm, securitygroups.frm, externalnetworks.frm, ipsec_site_connections.frm, bgp_speaker_dragent_bindings.frm, poolloadbalanceragentbindings.frm, bgp_speaker_network_bindings.frm, arista_provisioned_nets.frm, standardattributes.frm, logs.frm, firewall_rules.frm, ml2_port_bindings.frm, ml2_gre_endpoints.frm, address_associations.frm, poolmonitorassociations.frm, subnetpoolprefixes.frm, meteringlabelrules.frm, qos_packet_rate_limit_rules.frm, floatingipdnses.frm, qospolicyrbacs.frm, agents.frm, routerrules.frm, portforwardings.frm, ml2_distributed_port_bindings.frm, reservations.frm, porttrusted.frm, poolstatisticss.frm, firewall_policies.frm, securitygroupportbindings.frm, networks.frm, conntrack_helpers.frm, ipsecpeercidrs.frm, quotas.frm, qos_policies.frm, ha_router_vrid_allocations.frm, ipsecpolicies.frm, portuplinkstatuspropagation.frm, providerresourceassociations.frm, ml2_flat_allocations.frm, serviceprofiles.frm, subnetpoolrbacs.frm, qos_dscp_marking_rules.frm, qos_minimum_packet_rate_rules.frm, portdataplanestatuses.frm, qos_port_policy_bindings.frm, securitygrouprbacs.frm, ml2_vxlan_allocations.frm, flavors.frm, segmenthostmappings.frm, networkrbacs.frm
[01] 2025-07-12 20:50:30 Streaming ./designate/ files (each "...done"): service_statuses.frm, records.frm, blacklists.frm, alembic_version.frm, zone_attributes.frm, pool_target_options.frm, db.opt, pool_ns_records.frm, zone_transfer_requests.frm, zone_tasks.frm, zone_transfer_accepts.frm, tlds.frm, tsigkeys.frm, pools.frm, zones.frm, zone_masters.frm, recordsets.frm, pool_nameservers.frm, pool_also_notifies.frm, pool_targets.frm, quotas.frm, pool_target_masters.frm, pool_attributes.frm, shared_zones.frm
[01] 2025-07-12 20:50:30 Streaming ./cinder/ files (each "...done"): consistencygroups.frm, transfers.frm, image_volume_cache_entries.frm, group_snapshots.frm, volumes.frm, alembic_version.frm, volume_type_extra_specs.frm, default_volume_types.frm, workers.frm, clusters.frm, snapshot_metadata.frm, services.frm, db.opt, backup_metadata.frm, volume_admin_metadata.frm, volume_type_projects.frm, volume_glance_metadata.frm, group_types.frm, quality_of_service_specs.frm, group_volume_type_mapping.frm, backups.frm, attachment_specs.frm, snapshots.frm, messages.frm, group_type_projects.frm, volume_types.frm, cgsnapshots.frm, groups.frm, volume_metadata.frm, encryption.frm, quota_usages.frm, reservations.frm, group_type_specs.frm, quotas.frm, driver_initiator_data.frm, quota_classes.frm, volume_attachment.frm
[01] 2025-07-12 20:50:30 Streaming ./nova_cell0/ files (each "...done"): shadow_block_device_mapping.frm, shadow_security_groups.frm, shadow_snapshots.frm, shadow_cells.frm, instance_faults.frm, migrations.frm, instance_system_metadata.frm, share_mapping.frm, shadow_migrations.frm, instance_type_extra_specs.frm, shadow_volume_usage_cache.frm, shadow_instance_extra.frm, shadow_volume_id_mappings.frm, shadow_fixed_ips.frm, shadow_instance_faults.frm, security_group_instance_association.frm, alembic_version.frm, shadow_instance_info_caches.frm, shadow_instance_actions.frm, block_device_mapping.frm, compute_nodes.frm, shadow_instances.frm, shadow_compute_nodes.frm, aggregate_hosts.frm, agent_builds.frm, shadow_pci_devices.frm, aggregate_metadata.frm, shadow_instance_group_member.frm, cells.frm, services.frm, db.opt, floating_ips.frm, shadow_instance_id_mappings.frm, shadow_aggregates.frm, volume_id_mappings.frm, shadow_snapshot_id_mappings.frm, shadow_quota_usages.frm, shadow_instance_actions_events.frm, project_user_quotas.frm, virtual_interfaces.frm, shadow_floating_ips.frm, shadow_aggregate_hosts.frm, security_groups.frm, shadow_instance_type_projects.frm, shadow_certificates.frm, shadow_agent_builds.frm, tags.frm, instance_extra.frm, shadow_reservations.frm, instance_actions_events.frm, shadow_security_group_instance_association.frm, aggregates.frm, key_pairs.frm, console_pools.frm, shadow_quota_classes.frm, shadow_dns_domains.frm, shadow_instance_system_metadata.frm, shadow_instance_types.frm, instance_groups.frm, snapshot_id_mappings.frm, instance_id_mappings.frm, fixed_ips.frm, shadow_console_pools.frm, instance_info_caches.frm, shadow_instance_type_extra_specs.frm, shadow_consoles.frm, snapshots.frm, instance_actions.frm, shadow_security_group_rules.frm, shadow_alembic_version.frm, shadow_key_pairs.frm, pci_devices.frm, shadow_bw_usage_cache.frm, shadow_aggregate_metadata.frm, resource_provider_aggregates.frm, instance_group_policy.frm, shadow_provider_fw_rules.frm, shadow_instance_groups.frm, task_log.frm, certificates.frm, shadow_s3_images.frm, security_group_default_rules.frm, bw_usage_cache.frm, s3_images.frm, inventories.frm, dns_domains.frm, allocations.frm, provider_fw_rules.frm, resource_providers.frm, instance_types.frm, console_auth_tokens.frm, shadow_networks.frm, instance_group_member.frm, shadow_virtual_interfaces.frm, shadow_instance_metadata.frm, shadow_quotas.frm, quota_usages.frm, shadow_security_group_default_rules.frm, reservations.frm, instance_metadata.frm, volume_usage_cache.frm, instances.frm, networks.frm, instance_type_projects.frm, consoles.frm, quotas.frm, shadow_task_log.frm, shadow_services.frm, shadow_instance_group_policy.frm, quota_classes.frm, security_group_rules.frm, shadow_project_user_quotas.frm
[01] 2025-07-12 20:50:30 Streaming ./placement/ files (each "...done"): resource_provider_traits.frm, alembic_version.frm, users.frm, traits.frm, consumer_types.frm, db.opt, Streaming 
./placement/projects.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./placement/placement_aggregates.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./placement/resource_provider_aggregates.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./placement/inventories.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./placement/allocations.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./placement/resource_providers.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./placement/consumers.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./placement/resource_classes.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024waits_by_host_by_latency.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/host_summary_by_statement_latency.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/sys_config.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/schema_table_lock_waits.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/statement_analysis.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/host_summary_by_statement_type.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024io_global_by_wait_by_latency.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/user_summary.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/schema_unused_indexes.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/statements_with_full_table_scans.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 
20:50:30 Streaming ./sys/x@0024memory_by_thread_by_current_bytes.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/session_ssl_status.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024schema_table_statistics_with_buffer.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/innodb_buffer_stats_by_schema.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024memory_by_host_by_current_bytes.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/host_summary_by_stages.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/user_summary_by_stages.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024schema_table_lock_waits.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/schema_auto_increment_columns.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024statement_analysis.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024statements_with_full_table_scans.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024ps_digest_avg_latency_distribution.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024io_global_by_file_by_bytes.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/db.opt to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/schema_index_statistics.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/schema_table_statistics_with_buffer.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/session.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming 
./sys/x@0024ps_digest_95th_percentile_by_avg_us.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/wait_classes_global_by_latency.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/innodb_lock_waits.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/host_summary.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024memory_global_by_current_bytes.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024user_summary_by_file_io.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/waits_global_by_latency.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/io_global_by_file_by_latency.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/io_global_by_file_by_bytes.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/io_global_by_wait_by_latency.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/user_summary_by_file_io.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024schema_flattened_keys.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/waits_by_host_by_latency.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/statements_with_errors_or_warnings.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/schema_redundant_indexes.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024io_global_by_wait_by_bytes.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024innodb_buffer_stats_by_schema.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming 
./sys/io_global_by_wait_by_bytes.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/statements_with_sorting.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024user_summary_by_statement_type.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024host_summary_by_file_io.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024user_summary_by_statement_latency.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024host_summary_by_statement_latency.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024memory_global_total.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024io_global_by_file_by_latency.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/waits_by_user_by_latency.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024waits_by_user_by_latency.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/ps_check_lost_instrumentation.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/wait_classes_global_by_avg_latency.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024innodb_buffer_stats_by_table.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/memory_by_user_by_current_bytes.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/schema_table_statistics.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024ps_schema_table_statistics_io.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024innodb_lock_waits.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 
2025-07-12 20:50:30 Streaming ./sys/schema_tables_with_full_table_scans.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/host_summary_by_file_io.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/version.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024user_summary.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/statements_with_temp_tables.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/user_summary_by_statement_latency.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024statements_with_errors_or_warnings.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024host_summary_by_statement_type.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/user_summary_by_file_io_type.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024waits_global_by_latency.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/memory_by_host_by_current_bytes.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024host_summary_by_stages.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024memory_by_user_by_current_bytes.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024schema_table_statistics.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/memory_by_thread_by_current_bytes.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024statements_with_runtimes_in_95th_percentile.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024user_summary_by_stages.frm to ", "[01] 2025-07-12 
20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024user_summary_by_file_io_type.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024schema_index_statistics.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024host_summary_by_file_io_type.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/latest_file_io.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024latest_file_io.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/x@0024io_by_thread_by_latency.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/memory_global_total.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/memory_global_by_current_bytes.frm to ", "[01] 2025-07-12 20:50:30 ...done", "[01] 2025-07-12 20:50:30 Streaming ./sys/statements_with_runtimes_in_95th_percentile.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./sys/user_summary_by_statement_type.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./sys/x@0024schema_tables_with_full_table_scans.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./sys/innodb_buffer_stats_by_table.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./sys/x@0024wait_classes_global_by_latency.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./sys/processlist.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./sys/x@0024wait_classes_global_by_avg_latency.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./sys/x@0024session.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./sys/x@0024processlist.frm to ", "[01] 2025-07-12 20:50:31 ...done", 
"[01] 2025-07-12 20:50:31 Streaming ./sys/x@0024statements_with_temp_tables.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./sys/host_summary_by_file_io_type.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./sys/x@0024statements_with_sorting.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./sys/io_by_thread_by_latency.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./sys/metrics.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./sys/schema_object_overview.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./sys/x@0024host_summary.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/help_relation.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/index_stats.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/help_keyword.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/columns_priv.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/time_zone.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/column_stats.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/db.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/help_category.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/time_zone_leap_second.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/plugin.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/event.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/user.frm to ", "[01] 2025-07-12 20:50:31 ...done", 
"[01] 2025-07-12 20:50:31 Streaming ./mysql/roles_mapping.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/procs_priv.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/global_priv.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/db.opt to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/innodb_index_stats.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/general_log.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/gtid_slave_pos.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/time_zone_transition.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/func.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/proc.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/time_zone_name.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/servers.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/transaction_registry.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/proxies_priv.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/help_topic.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/innodb_table_stats.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/slow_log.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/wsrep_allowlist.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/time_zone_transition_type.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 
Streaming ./mysql/tables_priv.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/wsrep_cluster.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/wsrep_cluster_members.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/wsrep_streaming_log.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./mysql/table_stats.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./horizon/django_migrations.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./horizon/auth_group.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./horizon/django_session.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./horizon/db.opt to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./horizon/auth_group_permissions.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./horizon/auth_permission.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./horizon/django_content_type.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/test_data.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/login_attempt.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/correlation.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/query_history_details.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/cache_data.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/data_source.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/resource_version.frm to ", "[01] 2025-07-12 20:50:31 ...done", 
"[01] 2025-07-12 20:50:31 Streaming ./grafana/dashboard.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/alert_configuration_history.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/alert_notification.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/file_meta.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/alert_rule_tag.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/dashboard_version.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/plugin_setting.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/user.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/sso_setting.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/alert.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/user_external_session.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/resource.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/db.opt to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/org.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/role.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/session.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/short_url.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/dashboard_public.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/query_history.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 
20:50:31 Streaming ./grafana/entity_event.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/alert_instance.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/resource_blob.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/folder.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/dashboard_acl.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/user_auth_token.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/tag.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/alert_image.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/team_member.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/kv_store.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/user_auth.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/alert_rule.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/data_keys.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/temp_user.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/alert_configuration.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/org_user.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/builtin_role.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/resource_history.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/migration_log.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming 
./grafana/secrets.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/query_history_star.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/star.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/playlist_item.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/library_element_connection.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/user_role.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/annotation_tag.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/alert_rule_version.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/team.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/permission.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/quota.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/file.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/api_key.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/library_element.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/team_role.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/dashboard_tag.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/alert_rule_state.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/dashboard_provisioning.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/server_lock.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming 
./grafana/cloud_migration_session.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/ngalert_configuration.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/resource_migration_log.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/dashboard_snapshot.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/anon_device.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/annotation.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/signing_key.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/cloud_migration_resource.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/preferences.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/playlist.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/seed_assignment.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/alert_notification_state.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/provenance_type.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[01] 2025-07-12 20:50:31 Streaming ./grafana/cloud_migration_snapshot.frm to ", "[01] 2025-07-12 20:50:31 ...done", "[00] 2025-07-12 20:50:31 Finished backing up non-InnoDB tables and files", "[00] 2025-07-12 20:50:31 Waiting for log copy thread to read lsn 42893985", "[00] 2025-07-12 20:53:18 Retrying read of log at LSN=42850134", "[00] 2025-07-12 20:53:19 Retrying read of log at LSN=42850134", "[00] 2025-07-12 20:53:21 Retrying read of log at LSN=42850134", "[00] 2025-07-12 20:53:22 Retrying read of log at LSN=42850134", "[00] 2025-07-12 20:53:22 Was only able to copy log from 60383 to 42850134, not 
42893985; try increasing innodb_log_file_size", "mariabackup: Stopping log copying thread.[00] 2025-07-12 20:53:22 Retrying read of log at LSN=42850134", ""], "stdout": "Taking a full backup\n", "stdout_lines": ["Taking a full backup"]}
2025-07-12 20:53:24.557734 | orchestrator | 2025-07-12 20:53:24 | INFO  | Task 025b4923-7194-4324-811d-005e29eefe5e (mariadb_backup) was prepared for execution.
2025-07-12 20:53:24.557813 | orchestrator | 2025-07-12 20:53:24 | INFO  | It takes a moment until task 025b4923-7194-4324-811d-005e29eefe5e (mariadb_backup) has been started and output is visible here.
2025-07-12 20:56:17.940750 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart
2025-07-12 20:56:17.940952 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start
2025-07-12 20:56:17.940973 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_bootstrap_restart
2025-07-12 20:56:17.941010 | orchestrator | PLAY [Restart mariadb services] ************************************************
2025-07-12 20:56:17.941021 | orchestrator | skipping: no hosts matched
2025-07-12 20:56:17.941066 | orchestrator | PLAY [Start mariadb services] **************************************************
2025-07-12 20:56:17.941077 | orchestrator | skipping: no hosts matched
2025-07-12 20:56:17.941098 | orchestrator | PLAY [Restart bootstrap mariadb service] ***************************************
2025-07-12 20:56:17.941109 | orchestrator | skipping: no hosts matched
2025-07-12 20:56:17.941131 | orchestrator | PLAY [Apply mariadb post-configuration] ****************************************
2025-07-12 20:56:17.941153 | orchestrator | TASK [Include mariadb post-deploy.yml] *****************************************
2025-07-12 20:56:17.941164 | orchestrator | Saturday 12 July 2025 20:53:23 +0000 (0:03:08.558) 0:03:13.252 *********
2025-07-12 20:56:17.941174 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:56:17.941185 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:56:17.941207 | orchestrator | TASK [Include mariadb post-upgrade.yml] ****************************************
2025-07-12 20:56:17.941217 | orchestrator | Saturday 12 July 2025 20:53:23 +0000 (0:00:00.248) 0:03:13.501 *********
2025-07-12 20:56:17.941228 | orchestrator | skipping: [testbed-node-1]
2025-07-12 20:56:17.941239 | orchestrator | skipping: [testbed-node-2]
2025-07-12 20:56:17.941260 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 20:56:17.941271 | orchestrator | testbed-node-0 : ok=5  changed=0 unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-07-12 20:56:17.941283 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-12 20:56:17.941295 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-07-12 20:56:17.941332 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 20:56:17.941344 | orchestrator | Saturday 12 July 2025 20:53:24 +0000 (0:00:00.150) 0:03:13.652 *********
2025-07-12 20:56:17.941357 | orchestrator | ===============================================================================
2025-07-12 20:56:17.941369 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 188.56s
2025-07-12 20:56:17.941381 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 2.86s
2025-07-12 20:56:17.941393 | orchestrator | mariadb : include_tasks ------------------------------------------------- 0.50s
2025-07-12 20:56:17.941405 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s
2025-07-12 20:56:17.941417 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.39s
2025-07-12 20:56:17.941430 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.29s
2025-07-12 20:56:17.941442 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.25s
2025-07-12 20:56:17.941454 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.15s
2025-07-12 20:56:17.941478 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-07-12 20:56:17.941503 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-07-12 20:56:17.941514 | orchestrator | Saturday 12 July 2025 20:53:28 +0000 (0:00:00.204) 0:00:00.204 *********
2025-07-12 20:56:17.941527 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:56:17.941539 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:56:17.941551 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:56:17.941575 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-07-12 20:56:17.941587 | orchestrator | Saturday 12 July 2025 20:53:28 +0000 (0:00:00.326) 0:00:00.530 *********
2025-07-12 20:56:17.941607 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2025-07-12 20:56:17.941619 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2025-07-12 20:56:17.941632 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2025-07-12 20:56:17.941655 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2025-07-12 20:56:17.941676 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2025-07-12 20:56:17.941686 | orchestrator | Saturday 12 July 2025 20:53:29 +0000 (0:00:00.580) 0:00:01.111 *********
2025-07-12 20:56:17.941697 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-07-12 20:56:17.941708 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-07-12 20:56:17.941718 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-07-12 20:56:17.941739 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2025-07-12 20:56:17.941750 | orchestrator | Saturday 12 July 2025 20:53:29 +0000 (0:00:00.391) 0:00:01.503 *********
2025-07-12 20:56:17.941760 | orchestrator | included: /ansible/roles/mariadb/tasks/backup.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-07-12 20:56:17.941782 | orchestrator | TASK [mariadb : Get MariaDB container facts] ***********************************
2025-07-12 20:56:17.941810 | orchestrator | Saturday 12 July 2025 20:53:30 +0000 (0:00:00.593) 0:00:02.097 *********
2025-07-12 20:56:17.941821 | orchestrator | ok: [testbed-node-0]
2025-07-12 20:56:17.941838 | orchestrator | ok: [testbed-node-1]
2025-07-12 20:56:17.941849 | orchestrator | ok: [testbed-node-2]
2025-07-12 20:56:17.941871 | orchestrator | TASK [mariadb : Taking full database backup via Mariabackup] *******************
2025-07-12 20:56:17.941882 | orchestrator | Saturday 12 July 2025 20:53:33 +0000 (0:00:03.193) 0:00:05.290 *********
2025-07-12 20:56:17.941923 | orchestrator |
skipping: [testbed-node-1] 2025-07-12 20:56:17.941934 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:56:17.941945 | orchestrator | 2025-07-12 20:56:17.941956 | orchestrator | STILL ALIVE [task 'mariadb : Taking full database backup via Mariabackup' is running] *** 2025-07-12 20:56:17.941967 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-07-12 20:56:17.941978 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_start 2025-07-12 20:56:17.941989 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-07-12 20:56:17.942000 | orchestrator | mariadb_bootstrap_restart 2025-07-12 20:56:17.942011 | orchestrator | changed: [testbed-node-0] 2025-07-12 20:56:17.942074 | orchestrator | 2025-07-12 20:56:17.942086 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-07-12 20:56:17.942098 | orchestrator | skipping: no hosts matched 2025-07-12 20:56:17.942109 | orchestrator | 2025-07-12 20:56:17.942120 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-07-12 20:56:17.942131 | orchestrator | skipping: no hosts matched 2025-07-12 20:56:17.942142 | orchestrator | 2025-07-12 20:56:17.942153 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-07-12 20:56:17.942164 | orchestrator | skipping: no hosts matched 2025-07-12 20:56:17.942174 | orchestrator | 2025-07-12 20:56:17.942185 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-07-12 20:56:17.942196 | orchestrator | 2025-07-12 20:56:17.942207 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-07-12 20:56:17.942218 | orchestrator | Saturday 12 July 2025 20:56:16 +0000 (0:02:43.134) 0:02:48.425 ********* 2025-07-12 20:56:17.942228 | orchestrator | skipping: 
[testbed-node-0] 2025-07-12 20:56:17.942239 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:56:17.942250 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:56:17.942261 | orchestrator | 2025-07-12 20:56:17.942271 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-07-12 20:56:17.942292 | orchestrator | Saturday 12 July 2025 20:56:17 +0000 (0:00:00.322) 0:02:48.747 ********* 2025-07-12 20:56:17.942303 | orchestrator | skipping: [testbed-node-0] 2025-07-12 20:56:17.942314 | orchestrator | skipping: [testbed-node-1] 2025-07-12 20:56:17.942325 | orchestrator | skipping: [testbed-node-2] 2025-07-12 20:56:17.942335 | orchestrator | 2025-07-12 20:56:17.942346 | orchestrator | PLAY RECAP ********************************************************************* 2025-07-12 20:56:17.942358 | orchestrator | testbed-node-0 : ok=6  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-07-12 20:56:17.942369 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 20:56:17.942381 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-07-12 20:56:17.942391 | orchestrator | 2025-07-12 20:56:17.942402 | orchestrator | 2025-07-12 20:56:17.942413 | orchestrator | TASKS RECAP ******************************************************************** 2025-07-12 20:56:17.942424 | orchestrator | Saturday 12 July 2025 20:56:17 +0000 (0:00:00.429) 0:02:49.177 ********* 2025-07-12 20:56:17.942435 | orchestrator | =============================================================================== 2025-07-12 20:56:17.942446 | orchestrator | mariadb : Taking full database backup via Mariabackup ----------------- 163.13s 2025-07-12 20:56:17.942457 | orchestrator | mariadb : Get MariaDB container facts ----------------------------------- 3.19s 2025-07-12 20:56:17.942468 | orchestrator | mariadb : include_tasks 
------------------------------------------------- 0.59s 2025-07-12 20:56:17.942478 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.58s 2025-07-12 20:56:17.942489 | orchestrator | Include mariadb post-upgrade.yml ---------------------------------------- 0.43s 2025-07-12 20:56:17.942500 | orchestrator | mariadb : Group MariaDB hosts based on shards --------------------------- 0.39s 2025-07-12 20:56:17.942511 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.33s 2025-07-12 20:56:17.942522 | orchestrator | Include mariadb post-deploy.yml ----------------------------------------- 0.32s 2025-07-12 20:56:18.249467 | orchestrator | + sh -c /opt/configuration/scripts/check/300-openstack.sh 2025-07-12 20:56:18.259478 | orchestrator | + set -e 2025-07-12 20:56:18.259692 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-07-12 20:56:18.259732 | orchestrator | ++ export INTERACTIVE=false 2025-07-12 20:56:18.259766 | orchestrator | ++ INTERACTIVE=false 2025-07-12 20:56:18.259780 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-07-12 20:56:18.259791 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-07-12 20:56:18.259803 | orchestrator | + source /opt/configuration/scripts/manager-version.sh 2025-07-12 20:56:18.259824 | orchestrator | +++ awk '-F: ' '/^manager_version:/ { print $2 }' /opt/configuration/environments/manager/configuration.yml 2025-07-12 20:56:18.266308 | orchestrator | 2025-07-12 20:56:18.266386 | orchestrator | # OpenStack endpoints 2025-07-12 20:56:18.266412 | orchestrator | 2025-07-12 20:56:18.266441 | orchestrator | ++ export MANAGER_VERSION=9.2.0 2025-07-12 20:56:18.266454 | orchestrator | ++ MANAGER_VERSION=9.2.0 2025-07-12 20:56:18.266466 | orchestrator | + export OS_CLOUD=admin 2025-07-12 20:56:18.266476 | orchestrator | + OS_CLOUD=admin 2025-07-12 20:56:18.266488 | orchestrator | + echo 2025-07-12 20:56:18.266499 | orchestrator | + echo '# OpenStack 
endpoints' 2025-07-12 20:56:18.266510 | orchestrator | + echo 2025-07-12 20:56:18.266521 | orchestrator | + openstack endpoint list 2025-07-12 20:56:21.711853 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-07-12 20:56:21.712033 | orchestrator | | ID | Region | Service Name | Service Type | Enabled | Interface | URL | 2025-07-12 20:56:21.712069 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-07-12 20:56:21.712082 | orchestrator | | 053eeec82c524af48d9d0d8f2aa9deed | RegionOne | swift | object-store | True | internal | https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-07-12 20:56:21.712116 | orchestrator | | 0cb876a538db4d8aa04ea3abf9e00e10 | RegionOne | nova | compute | True | internal | https://api-int.testbed.osism.xyz:8774/v2.1 | 2025-07-12 20:56:21.712128 | orchestrator | | 14f74a79c919492891b9f638f8867478 | RegionOne | barbican | key-manager | True | public | https://api.testbed.osism.xyz:9311 | 2025-07-12 20:56:21.712139 | orchestrator | | 1afb29ace4404ba280114fb08a5b359c | RegionOne | keystone | identity | True | public | https://api.testbed.osism.xyz:5000 | 2025-07-12 20:56:21.712150 | orchestrator | | 2249a3dec9f946d1aa9305c5c3a7c9e7 | RegionOne | neutron | network | True | public | https://api.testbed.osism.xyz:9696 | 2025-07-12 20:56:21.712161 | orchestrator | | 3dc2a1ae38df48bf8434ce6ca7dbe7bd | RegionOne | magnum | container-infra | True | public | https://api.testbed.osism.xyz:9511/v1 | 2025-07-12 20:56:21.712172 | orchestrator | | 59c18ac98272431986625df6b4d70db2 | RegionOne | neutron | network | True | internal | https://api-int.testbed.osism.xyz:9696 | 2025-07-12 20:56:21.712183 | orchestrator | | 59f779bd89ac407185890130a1d6da3d | 
RegionOne | swift | object-store | True | public | https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s | 2025-07-12 20:56:21.712194 | orchestrator | | 5c3a8aff80564171823d45ebb678a864 | RegionOne | placement | placement | True | internal | https://api-int.testbed.osism.xyz:8780 | 2025-07-12 20:56:21.712205 | orchestrator | | 65f58fbafbc24a38b668028261c7e3ca | RegionOne | barbican | key-manager | True | internal | https://api-int.testbed.osism.xyz:9311 | 2025-07-12 20:56:21.712216 | orchestrator | | 6db06a606362486fad55263c6facd236 | RegionOne | glance | image | True | internal | https://api-int.testbed.osism.xyz:9292 | 2025-07-12 20:56:21.712227 | orchestrator | | 783f03a97dfc4570992fbc9b50367ca5 | RegionOne | keystone | identity | True | internal | https://api-int.testbed.osism.xyz:5000 | 2025-07-12 20:56:21.712238 | orchestrator | | 7a7fd1cad12042eea307e40165552488 | RegionOne | cinderv3 | volumev3 | True | internal | https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-07-12 20:56:21.712249 | orchestrator | | 84470bc1b7194d678769fbd73e5ca697 | RegionOne | octavia | load-balancer | True | public | https://api.testbed.osism.xyz:9876 | 2025-07-12 20:56:21.712259 | orchestrator | | a341898957fc4d509f8281d9c7eddff4 | RegionOne | octavia | load-balancer | True | internal | https://api-int.testbed.osism.xyz:9876 | 2025-07-12 20:56:21.712270 | orchestrator | | a5ec0b69f23d4000b44b774559968511 | RegionOne | cinderv3 | volumev3 | True | public | https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s | 2025-07-12 20:56:21.712281 | orchestrator | | a6bb74c401494c0aad699832a270ff36 | RegionOne | nova | compute | True | public | https://api.testbed.osism.xyz:8774/v2.1 | 2025-07-12 20:56:21.712292 | orchestrator | | ba49b26ca1ac4d32bf236019f1ef0692 | RegionOne | magnum | container-infra | True | internal | https://api-int.testbed.osism.xyz:9511/v1 | 2025-07-12 20:56:21.712303 | orchestrator | | bca4361be73f4092afb4e07e2898aabd | RegionOne | designate | dns | 
True | internal | https://api-int.testbed.osism.xyz:9001 | 2025-07-12 20:56:21.712314 | orchestrator | | bddb7cc1fde44283a48c67ad09681ce7 | RegionOne | placement | placement | True | public | https://api.testbed.osism.xyz:8780 | 2025-07-12 20:56:21.712344 | orchestrator | | d0884d8cdf2c40d59e7cc5242cb52eb8 | RegionOne | designate | dns | True | public | https://api.testbed.osism.xyz:9001 | 2025-07-12 20:56:21.712366 | orchestrator | | dc863df7b91448cd9b4ff677682e777a | RegionOne | glance | image | True | public | https://api.testbed.osism.xyz:9292 | 2025-07-12 20:56:21.712382 | orchestrator | +----------------------------------+-----------+--------------+-----------------+---------+-----------+---------------------------------------------------------------------+ 2025-07-12 20:56:21.952211 | orchestrator | 2025-07-12 20:56:21.952313 | orchestrator | # Cinder 2025-07-12 20:56:21.952328 | orchestrator | 2025-07-12 20:56:21.952340 | orchestrator | + echo 2025-07-12 20:56:21.952351 | orchestrator | + echo '# Cinder' 2025-07-12 20:56:21.952362 | orchestrator | + echo 2025-07-12 20:56:21.952373 | orchestrator | + openstack volume service list 2025-07-12 20:56:25.320382 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-07-12 20:56:25.320518 | orchestrator | | Binary | Host | Zone | Status | State | Updated At | 2025-07-12 20:56:25.320533 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-07-12 20:56:25.320542 | orchestrator | | cinder-scheduler | testbed-node-0 | internal | enabled | up | 2025-07-12T20:56:24.000000 | 2025-07-12 20:56:25.320552 | orchestrator | | cinder-scheduler | testbed-node-2 | internal | enabled | up | 2025-07-12T20:56:16.000000 | 2025-07-12 20:56:25.320560 | orchestrator | | cinder-scheduler | testbed-node-1 | internal | enabled | up | 2025-07-12T20:56:16.000000 | 2025-07-12 20:56:25.320569 | 
orchestrator | | cinder-volume | testbed-node-3@rbd-volumes | nova | enabled | up | 2025-07-12T20:56:15.000000 | 2025-07-12 20:56:25.320578 | orchestrator | | cinder-volume | testbed-node-5@rbd-volumes | nova | enabled | up | 2025-07-12T20:56:15.000000 | 2025-07-12 20:56:25.320587 | orchestrator | | cinder-volume | testbed-node-4@rbd-volumes | nova | enabled | up | 2025-07-12T20:56:16.000000 | 2025-07-12 20:56:25.320596 | orchestrator | | cinder-backup | testbed-node-3 | nova | enabled | up | 2025-07-12T20:56:16.000000 | 2025-07-12 20:56:25.320604 | orchestrator | | cinder-backup | testbed-node-4 | nova | enabled | up | 2025-07-12T20:56:16.000000 | 2025-07-12 20:56:25.320613 | orchestrator | | cinder-backup | testbed-node-5 | nova | enabled | up | 2025-07-12T20:56:16.000000 | 2025-07-12 20:56:25.320622 | orchestrator | +------------------+----------------------------+----------+---------+-------+----------------------------+ 2025-07-12 20:56:25.616685 | orchestrator | 2025-07-12 20:56:25.616784 | orchestrator | # Neutron 2025-07-12 20:56:25.616799 | orchestrator | 2025-07-12 20:56:25.616810 | orchestrator | + echo 2025-07-12 20:56:25.616821 | orchestrator | + echo '# Neutron' 2025-07-12 20:56:25.616832 | orchestrator | + echo 2025-07-12 20:56:25.616844 | orchestrator | + openstack network agent list 2025-07-12 20:56:28.368600 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-07-12 20:56:28.368699 | orchestrator | | ID | Agent Type | Host | Availability Zone | Alive | State | Binary | 2025-07-12 20:56:28.368712 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-07-12 20:56:28.368723 | orchestrator | | testbed-node-4 | OVN Controller agent | testbed-node-4 | | :-) | UP | ovn-controller | 2025-07-12 20:56:28.368733 | 
orchestrator | | testbed-node-5 | OVN Controller agent | testbed-node-5 | | :-) | UP | ovn-controller | 2025-07-12 20:56:28.368742 | orchestrator | | testbed-node-3 | OVN Controller agent | testbed-node-3 | | :-) | UP | ovn-controller | 2025-07-12 20:56:28.368765 | orchestrator | | testbed-node-0 | OVN Controller Gateway agent | testbed-node-0 | nova | :-) | UP | ovn-controller | 2025-07-12 20:56:28.368800 | orchestrator | | testbed-node-1 | OVN Controller Gateway agent | testbed-node-1 | nova | :-) | UP | ovn-controller | 2025-07-12 20:56:28.368810 | orchestrator | | testbed-node-2 | OVN Controller Gateway agent | testbed-node-2 | nova | :-) | UP | ovn-controller | 2025-07-12 20:56:28.368820 | orchestrator | | 36b9d21c-9928-5c0a-9b27-73ac7a3e770c | OVN Metadata agent | testbed-node-5 | | :-) | UP | neutron-ovn-metadata-agent | 2025-07-12 20:56:28.368830 | orchestrator | | e645415a-98f5-5758-8cd1-c47af282b5c0 | OVN Metadata agent | testbed-node-3 | | :-) | UP | neutron-ovn-metadata-agent | 2025-07-12 20:56:28.368839 | orchestrator | | 4939696e-6092-5a33-bb73-b850064684df | OVN Metadata agent | testbed-node-4 | | :-) | UP | neutron-ovn-metadata-agent | 2025-07-12 20:56:28.368849 | orchestrator | +--------------------------------------+------------------------------+----------------+-------------------+-------+-------+----------------------------+ 2025-07-12 20:56:28.628458 | orchestrator | + openstack network service provider list 2025-07-12 20:56:31.275324 | orchestrator | +---------------+------+---------+ 2025-07-12 20:56:31.275433 | orchestrator | | Service Type | Name | Default | 2025-07-12 20:56:31.275448 | orchestrator | +---------------+------+---------+ 2025-07-12 20:56:31.275459 | orchestrator | | L3_ROUTER_NAT | ovn | True | 2025-07-12 20:56:31.275471 | orchestrator | +---------------+------+---------+ 2025-07-12 20:56:31.553769 | orchestrator | 2025-07-12 20:56:31.553892 | orchestrator | # Nova 2025-07-12 20:56:31.553907 | orchestrator | 2025-07-12 
20:56:31.553981 | orchestrator | + echo 2025-07-12 20:56:31.553992 | orchestrator | + echo '# Nova' 2025-07-12 20:56:31.554002 | orchestrator | + echo 2025-07-12 20:56:31.554065 | orchestrator | + openstack compute service list 2025-07-12 20:56:34.400436 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-07-12 20:56:34.400590 | orchestrator | | ID | Binary | Host | Zone | Status | State | Updated At | 2025-07-12 20:56:34.400610 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-07-12 20:56:34.400622 | orchestrator | | 633e3c86-54a4-4f09-b469-35dbee36b0fb | nova-scheduler | testbed-node-0 | internal | enabled | up | 2025-07-12T20:56:25.000000 | 2025-07-12 20:56:34.400633 | orchestrator | | 599b2937-6076-4c3b-b6ec-0a62d4a0a37a | nova-scheduler | testbed-node-2 | internal | enabled | up | 2025-07-12T20:56:24.000000 | 2025-07-12 20:56:34.400643 | orchestrator | | 8889174c-6e80-4263-82a7-960c04b76c1c | nova-scheduler | testbed-node-1 | internal | enabled | up | 2025-07-12T20:56:26.000000 | 2025-07-12 20:56:34.400654 | orchestrator | | 1ef3d8a0-2eab-474a-bc45-fdee42344eaf | nova-conductor | testbed-node-0 | internal | enabled | up | 2025-07-12T20:56:26.000000 | 2025-07-12 20:56:34.400665 | orchestrator | | 83c42c76-e7eb-43af-be24-7da60b3517c6 | nova-conductor | testbed-node-1 | internal | enabled | up | 2025-07-12T20:56:29.000000 | 2025-07-12 20:56:34.400676 | orchestrator | | 4597dfb1-1396-414a-ac48-b737e5391a49 | nova-conductor | testbed-node-2 | internal | enabled | up | 2025-07-12T20:56:29.000000 | 2025-07-12 20:56:34.400687 | orchestrator | | 2f758975-b53c-4f9e-8ea0-530656eaef86 | nova-compute | testbed-node-3 | nova | enabled | up | 2025-07-12T20:56:26.000000 | 2025-07-12 20:56:34.400698 | orchestrator | | c2339914-f26d-4e56-bd0c-ed03f24d1cc8 | nova-compute | 
testbed-node-4 | nova | enabled | up | 2025-07-12T20:56:27.000000 | 2025-07-12 20:56:34.400709 | orchestrator | | 38d4909d-da52-45d7-9cc8-9350ebe49500 | nova-compute | testbed-node-5 | nova | enabled | up | 2025-07-12T20:56:27.000000 | 2025-07-12 20:56:34.400720 | orchestrator | +--------------------------------------+----------------+----------------+----------+---------+-------+----------------------------+ 2025-07-12 20:56:34.684553 | orchestrator | + openstack hypervisor list 2025-07-12 20:56:39.701358 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-07-12 20:56:39.701484 | orchestrator | | ID | Hypervisor Hostname | Hypervisor Type | Host IP | State | 2025-07-12 20:56:39.701495 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-07-12 20:56:39.701501 | orchestrator | | 08ae39a4-b766-450e-b855-b0acf1e5aaf6 | testbed-node-3 | QEMU | 192.168.16.13 | up | 2025-07-12 20:56:39.701508 | orchestrator | | 1de72216-aa84-4dac-b85e-c2e56cdcebfc | testbed-node-4 | QEMU | 192.168.16.14 | up | 2025-07-12 20:56:39.701515 | orchestrator | | e6831e11-d2e4-4d1e-bbde-f85419cdb859 | testbed-node-5 | QEMU | 192.168.16.15 | up | 2025-07-12 20:56:39.701521 | orchestrator | +--------------------------------------+---------------------+-----------------+---------------+-------+ 2025-07-12 20:56:39.987112 | orchestrator | 2025-07-12 20:56:39.987228 | orchestrator | # Run OpenStack test play 2025-07-12 20:56:39.987252 | orchestrator | 2025-07-12 20:56:39.987270 | orchestrator | + echo 2025-07-12 20:56:39.987284 | orchestrator | + echo '# Run OpenStack test play' 2025-07-12 20:56:39.987296 | orchestrator | + echo 2025-07-12 20:56:39.987306 | orchestrator | + osism apply --environment openstack test 2025-07-12 20:56:41.833716 | orchestrator | 2025-07-12 20:56:41 | INFO  | Trying to run play test in environment openstack 2025-07-12 
20:56:41.900663 | orchestrator | 2025-07-12 20:56:41 | INFO  | Task a5e19a7e-e286-493b-8288-1abe2d10de7a (test) was prepared for execution. 2025-07-12 20:56:41.900776 | orchestrator | 2025-07-12 20:56:41 | INFO  | It takes a moment until task a5e19a7e-e286-493b-8288-1abe2d10de7a (test) has been started and output is visible here. 2025-07-12 21:02:32.961479 | orchestrator | 2025-07-12 21:02:32.961615 | orchestrator | PLAY [Create test project] ***************************************************** 2025-07-12 21:02:32.961643 | orchestrator | 2025-07-12 21:02:32.961663 | orchestrator | TASK [Create test domain] ****************************************************** 2025-07-12 21:02:32.961682 | orchestrator | Saturday 12 July 2025 20:56:45 +0000 (0:00:00.076) 0:00:00.077 ********* 2025-07-12 21:02:32.961701 | orchestrator | changed: [localhost] 2025-07-12 21:02:32.961721 | orchestrator | 2025-07-12 21:02:32.961741 | orchestrator | TASK [Create test-admin user] ************************************************** 2025-07-12 21:02:32.961760 | orchestrator | Saturday 12 July 2025 20:56:49 +0000 (0:00:03.802) 0:00:03.879 ********* 2025-07-12 21:02:32.961778 | orchestrator | changed: [localhost] 2025-07-12 21:02:32.961797 | orchestrator | 2025-07-12 21:02:32.961815 | orchestrator | TASK [Add manager role to user test-admin] ************************************* 2025-07-12 21:02:32.961835 | orchestrator | Saturday 12 July 2025 20:56:54 +0000 (0:00:04.337) 0:00:08.216 ********* 2025-07-12 21:02:32.961853 | orchestrator | changed: [localhost] 2025-07-12 21:02:32.961871 | orchestrator | 2025-07-12 21:02:32.961889 | orchestrator | TASK [Create test project] ***************************************************** 2025-07-12 21:02:32.961908 | orchestrator | Saturday 12 July 2025 20:57:00 +0000 (0:00:06.475) 0:00:14.692 ********* 2025-07-12 21:02:32.961927 | orchestrator | changed: [localhost] 2025-07-12 21:02:32.961946 | orchestrator | 2025-07-12 21:02:32.961965 | orchestrator | TASK 
[Create test user] ******************************************************** 2025-07-12 21:02:32.961984 | orchestrator | Saturday 12 July 2025 20:57:04 +0000 (0:00:04.013) 0:00:18.706 ********* 2025-07-12 21:02:32.962004 | orchestrator | changed: [localhost] 2025-07-12 21:02:32.962173 | orchestrator | 2025-07-12 21:02:32.962197 | orchestrator | TASK [Add member roles to user test] ******************************************* 2025-07-12 21:02:32.962214 | orchestrator | Saturday 12 July 2025 20:57:08 +0000 (0:00:04.174) 0:00:22.880 ********* 2025-07-12 21:02:32.962232 | orchestrator | changed: [localhost] => (item=load-balancer_member) 2025-07-12 21:02:32.962270 | orchestrator | changed: [localhost] => (item=member) 2025-07-12 21:02:32.962291 | orchestrator | changed: [localhost] => (item=creator) 2025-07-12 21:02:32.962310 | orchestrator | 2025-07-12 21:02:32.962329 | orchestrator | TASK [Create test server group] ************************************************ 2025-07-12 21:02:32.962348 | orchestrator | Saturday 12 July 2025 20:57:20 +0000 (0:00:12.087) 0:00:34.967 ********* 2025-07-12 21:02:32.962401 | orchestrator | changed: [localhost] 2025-07-12 21:02:32.962420 | orchestrator | 2025-07-12 21:02:32.962439 | orchestrator | TASK [Create ssh security group] *********************************************** 2025-07-12 21:02:32.962457 | orchestrator | Saturday 12 July 2025 20:57:25 +0000 (0:00:04.324) 0:00:39.292 ********* 2025-07-12 21:02:32.962476 | orchestrator | changed: [localhost] 2025-07-12 21:02:32.962489 | orchestrator | 2025-07-12 21:02:32.962500 | orchestrator | TASK [Add rule to ssh security group] ****************************************** 2025-07-12 21:02:32.962511 | orchestrator | Saturday 12 July 2025 20:57:29 +0000 (0:00:04.752) 0:00:44.045 ********* 2025-07-12 21:02:32.962521 | orchestrator | changed: [localhost] 2025-07-12 21:02:32.962532 | orchestrator | 2025-07-12 21:02:32.962543 | orchestrator | TASK [Create icmp security group] 
********************************************** 2025-07-12 21:02:32.962554 | orchestrator | Saturday 12 July 2025 20:57:34 +0000 (0:00:04.225) 0:00:48.270 ********* 2025-07-12 21:02:32.962565 | orchestrator | changed: [localhost] 2025-07-12 21:02:32.962576 | orchestrator | 2025-07-12 21:02:32.962587 | orchestrator | TASK [Add rule to icmp security group] ***************************************** 2025-07-12 21:02:32.962598 | orchestrator | Saturday 12 July 2025 20:57:38 +0000 (0:00:03.989) 0:00:52.259 ********* 2025-07-12 21:02:32.962608 | orchestrator | changed: [localhost] 2025-07-12 21:02:32.962619 | orchestrator | 2025-07-12 21:02:32.962630 | orchestrator | TASK [Create test keypair] ***************************************************** 2025-07-12 21:02:32.962641 | orchestrator | Saturday 12 July 2025 20:57:42 +0000 (0:00:04.100) 0:00:56.359 ********* 2025-07-12 21:02:32.962652 | orchestrator | changed: [localhost] 2025-07-12 21:02:32.962663 | orchestrator | 2025-07-12 21:02:32.962674 | orchestrator | TASK [Create test network topology] ******************************************** 2025-07-12 21:02:32.962685 | orchestrator | Saturday 12 July 2025 20:57:46 +0000 (0:00:04.306) 0:01:00.666 ********* 2025-07-12 21:02:32.962696 | orchestrator | changed: [localhost] 2025-07-12 21:02:32.962706 | orchestrator | 2025-07-12 21:02:32.962717 | orchestrator | TASK [Create test instances] *************************************************** 2025-07-12 21:02:32.962728 | orchestrator | Saturday 12 July 2025 20:58:01 +0000 (0:00:15.133) 0:01:15.800 ********* 2025-07-12 21:02:32.962739 | orchestrator | changed: [localhost] => (item=test) 2025-07-12 21:02:32.962750 | orchestrator | changed: [localhost] => (item=test-1) 2025-07-12 21:02:32.962764 | orchestrator | changed: [localhost] => (item=test-2) 2025-07-12 21:02:32.962782 | orchestrator | 2025-07-12 21:02:32.962801 | orchestrator | STILL ALIVE [task 'Create test instances' is running] ************************** 2025-07-12 
2025-07-12 21:02:32.962817 | orchestrator | changed: [localhost] => (item=test-3)
2025-07-12 21:02:32.962834 | orchestrator |
2025-07-12 21:02:32.962850 | orchestrator | STILL ALIVE [task 'Create test instances' is running] **************************
2025-07-12 21:02:32.962867 | orchestrator | changed: [localhost] => (item=test-4)
2025-07-12 21:02:32.962884 | orchestrator |
2025-07-12 21:02:32.962901 | orchestrator | TASK [Add metadata to instances] ***********************************************
2025-07-12 21:02:32.962919 | orchestrator | Saturday 12 July 2025 21:01:09 +0000 (0:03:07.414) 0:04:23.214 *********
2025-07-12 21:02:32.962939 | orchestrator | changed: [localhost] => (item=test)
2025-07-12 21:02:32.962958 | orchestrator | changed: [localhost] => (item=test-1)
2025-07-12 21:02:32.962976 | orchestrator | changed: [localhost] => (item=test-2)
2025-07-12 21:02:32.962988 | orchestrator | changed: [localhost] => (item=test-3)
2025-07-12 21:02:32.962998 | orchestrator | changed: [localhost] => (item=test-4)
2025-07-12 21:02:32.963009 | orchestrator |
2025-07-12 21:02:32.963020 | orchestrator | TASK [Add tag to instances] ****************************************************
2025-07-12 21:02:32.963032 | orchestrator | Saturday 12 July 2025 21:01:32 +0000 (0:00:23.626) 0:04:46.841 *********
2025-07-12 21:02:32.963044 | orchestrator | changed: [localhost] => (item=test)
2025-07-12 21:02:32.963063 | orchestrator | changed: [localhost] => (item=test-1)
2025-07-12 21:02:32.963104 | orchestrator | changed: [localhost] => (item=test-2)
2025-07-12 21:02:32.963125 | orchestrator | changed: [localhost] => (item=test-3)
2025-07-12 21:02:32.963168 | orchestrator | changed: [localhost] => (item=test-4)
2025-07-12 21:02:32.963209 | orchestrator |
2025-07-12 21:02:32.963233 | orchestrator | TASK [Create test volume] ******************************************************
2025-07-12 21:02:32.963250 | orchestrator | Saturday 12 July 2025 21:02:06 +0000 (0:00:34.006) 0:05:20.847 *********
2025-07-12 21:02:32.963269 | orchestrator | changed: [localhost]
2025-07-12 21:02:32.963298 | orchestrator |
2025-07-12 21:02:32.963317 | orchestrator | TASK [Attach test volume] ******************************************************
2025-07-12 21:02:32.963335 | orchestrator | Saturday 12 July 2025 21:02:14 +0000 (0:00:07.371) 0:05:28.218 *********
2025-07-12 21:02:32.963361 | orchestrator | changed: [localhost]
2025-07-12 21:02:32.963383 | orchestrator |
2025-07-12 21:02:32.963412 | orchestrator | TASK [Create floating ip address] **********************************************
2025-07-12 21:02:32.963435 | orchestrator | Saturday 12 July 2025 21:02:27 +0000 (0:00:13.492) 0:05:41.711 *********
2025-07-12 21:02:32.963460 | orchestrator | ok: [localhost]
2025-07-12 21:02:32.963478 | orchestrator |
2025-07-12 21:02:32.963496 | orchestrator | TASK [Print floating ip address] ***********************************************
2025-07-12 21:02:32.963514 | orchestrator | Saturday 12 July 2025 21:02:32 +0000 (0:00:05.066) 0:05:46.777 *********
2025-07-12 21:02:32.963540 | orchestrator | ok: [localhost] => {
2025-07-12 21:02:32.963563 | orchestrator |     "msg": "192.168.112.178"
2025-07-12 21:02:32.963582 | orchestrator | }
2025-07-12 21:02:32.963602 | orchestrator |
2025-07-12 21:02:32.963620 | orchestrator | PLAY RECAP *********************************************************************
2025-07-12 21:02:32.963639 | orchestrator | localhost : ok=20  changed=18  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-07-12 21:02:32.963660 | orchestrator |
2025-07-12 21:02:32.963697 | orchestrator | TASKS RECAP ********************************************************************
2025-07-12 21:02:32.963716 | orchestrator | Saturday 12 July 2025 21:02:32 +0000 (0:00:00.040) 0:05:46.817 *********
2025-07-12 21:02:32.963747 | orchestrator | ===============================================================================
2025-07-12 21:02:32.963767 | orchestrator | Create test instances ------------------------------------------------- 187.41s
2025-07-12 21:02:32.963786 | orchestrator | Add tag to instances --------------------------------------------------- 34.01s
2025-07-12 21:02:32.963805 | orchestrator | Add metadata to instances ---------------------------------------------- 23.63s
2025-07-12 21:02:32.963827 | orchestrator | Create test network topology ------------------------------------------- 15.13s
2025-07-12 21:02:32.963857 | orchestrator | Attach test volume ----------------------------------------------------- 13.49s
2025-07-12 21:02:32.963887 | orchestrator | Add member roles to user test ------------------------------------------ 12.09s
2025-07-12 21:02:32.963910 | orchestrator | Create test volume ------------------------------------------------------ 7.37s
2025-07-12 21:02:32.963929 | orchestrator | Add manager role to user test-admin ------------------------------------- 6.48s
2025-07-12 21:02:32.963947 | orchestrator | Create floating ip address ---------------------------------------------- 5.07s
2025-07-12 21:02:32.963965 | orchestrator | Create ssh security group ----------------------------------------------- 4.75s
2025-07-12 21:02:32.963983 | orchestrator | Create test-admin user -------------------------------------------------- 4.34s
2025-07-12 21:02:32.964001 | orchestrator | Create test server group ------------------------------------------------ 4.32s
2025-07-12 21:02:32.964018 | orchestrator | Create test keypair ----------------------------------------------------- 4.31s
2025-07-12 21:02:32.964037 | orchestrator | Add rule to ssh security group ------------------------------------------ 4.23s
2025-07-12 21:02:32.964053 | orchestrator | Create test user -------------------------------------------------------- 4.17s
2025-07-12 21:02:32.964072 | orchestrator | Add rule to icmp security group ----------------------------------------- 4.10s
2025-07-12 21:02:32.964119 | orchestrator | Create test project ----------------------------------------------------- 4.01s
2025-07-12 21:02:32.964138 | orchestrator | Create icmp security group ---------------------------------------------- 3.99s
2025-07-12 21:02:32.964157 | orchestrator | Create test domain ------------------------------------------------------ 3.80s
2025-07-12 21:02:32.964188 | orchestrator | Print floating ip address ----------------------------------------------- 0.04s
2025-07-12 21:02:33.270258 | orchestrator | + server_list
2025-07-12 21:02:33.270350 | orchestrator | + openstack --os-cloud test server list
+--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
| ID                                   | Name   | Status | Networks                                           | Image        | Flavor     |
+--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
| 550e32e3-64ea-4bb3-847b-bad5f1b31396 | test-4 | ACTIVE | auto_allocated_network=10.42.0.10, 192.168.112.105 | Cirros 0.6.2 | SCS-1L-1-5 |
| d65cc2d3-3f0f-4cfa-a715-6742de23350a | test-3 | ACTIVE | auto_allocated_network=10.42.0.3, 192.168.112.141  | Cirros 0.6.2 | SCS-1L-1-5 |
| 5b6a9932-fc8f-4b69-a79f-ee01076f9597 | test-2 | ACTIVE | auto_allocated_network=10.42.0.46, 192.168.112.108 | Cirros 0.6.2 | SCS-1L-1-5 |
| f7129dab-37ac-4a45-8ef8-017beb72c348 | test-1 | ACTIVE | auto_allocated_network=10.42.0.36, 192.168.112.140 | Cirros 0.6.2 | SCS-1L-1-5 |
| b86333af-7802-4c20-a763-19773f92bf4f | test   | ACTIVE | auto_allocated_network=10.42.0.14, 192.168.112.178 | Cirros 0.6.2 | SCS-1L-1-5 |
+--------------------------------------+--------+--------+----------------------------------------------------+--------------+------------+
2025-07-12 21:02:37.286553 | orchestrator | + openstack --os-cloud test server show test
+-------------------------------------+----------------------------------------------------------------------+
| Field                               | Value                                                                |
+-------------------------------------+----------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-07-12T20:58:29.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.14, 192.168.112.178 |
| config_drive | |
| created | 2025-07-12T20:58:09Z |
| description | None |
| flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 6e44ee2b1e840eb62d24f5580edda15185d8e3578552ab366b36153e |
| host_status | None |
| id | b86333af-7802-4c20-a763-19773f92bf4f |
| image | Cirros 0.6.2 (7466eb57-d941-44df-8e24-295153b95278) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 2b56b716674646668cbc73084993db5d |
| properties | hostname='test' |
| security_groups | name='ssh' |
| | name='icmp' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-07-12T21:01:14Z |
| user_id | 4ad79f9999c949c8a0809a080b26fc30 |
| volumes_attached | delete_on_termination='False', id='da820934-eb1f-4c36-b809-8fe2ad67f20b' |
+-------------------------------------+----------------------------------------------------------------------+
2025-07-12 21:02:40.910582 | orchestrator | + openstack --os-cloud test server show test-1
+-------------------------------------+----------------------------------------------------------------------+
| Field                               | Value                                                                |
+-------------------------------------+----------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-1 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-07-12T20:59:12.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.36, 192.168.112.140 |
| config_drive | |
| created | 2025-07-12T20:58:51Z |
| description | None |
| flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | b129da93cdf9c9873a6070b4f26624c9b5f26ee437fde3afbe50b951 |
| host_status | None |
| id | f7129dab-37ac-4a45-8ef8-017beb72c348 |
| image | Cirros 0.6.2 (7466eb57-d941-44df-8e24-295153b95278) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-1 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 2b56b716674646668cbc73084993db5d |
| properties | hostname='test-1' |
| security_groups | name='ssh' |
| | name='icmp' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-07-12T21:01:18Z |
| user_id | 4ad79f9999c949c8a0809a080b26fc30 |
| volumes_attached | |
+-------------------------------------+----------------------------------------------------------------------+
2025-07-12 21:02:44.481976 | orchestrator | + openstack --os-cloud test server show test-2
+-------------------------------------+----------------------------------------------------------------------+
| Field                               | Value                                                                |
+-------------------------------------+----------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-2 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-07-12T20:59:53.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.46, 192.168.112.108 |
| config_drive | |
| created | 2025-07-12T20:59:31Z |
| description | None |
| flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 34a1fc7731672dd30b99c62d545357e210e04680ca869ac1915880cb |
| host_status | None |
| id | 5b6a9932-fc8f-4b69-a79f-ee01076f9597 |
| image | Cirros 0.6.2 (7466eb57-d941-44df-8e24-295153b95278) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-2 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 2b56b716674646668cbc73084993db5d |
| properties | hostname='test-2' |
| security_groups | name='ssh' |
| | name='icmp' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-07-12T21:01:23Z |
| user_id | 4ad79f9999c949c8a0809a080b26fc30 |
| volumes_attached | |
+-------------------------------------+----------------------------------------------------------------------+
2025-07-12 21:02:47.870608 | orchestrator | + openstack --os-cloud test server show test-3
+-------------------------------------+----------------------------------------------------------------------+
| Field                               | Value                                                                |
+-------------------------------------+----------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-3 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-07-12T21:00:26.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.3, 192.168.112.141 |
| config_drive | |
| created | 2025-07-12T21:00:10Z |
| description | None |
| flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 6e44ee2b1e840eb62d24f5580edda15185d8e3578552ab366b36153e |
| host_status | None |
| id | d65cc2d3-3f0f-4cfa-a715-6742de23350a |
| image | Cirros 0.6.2 (7466eb57-d941-44df-8e24-295153b95278) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-3 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 2b56b716674646668cbc73084993db5d |
| properties | hostname='test-3' |
| security_groups | name='ssh' |
| | name='icmp' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-07-12T21:01:27Z |
| user_id | 4ad79f9999c949c8a0809a080b26fc30 |
| volumes_attached | |
+-------------------------------------+----------------------------------------------------------------------+
2025-07-12 21:02:51.372871 | orchestrator | + openstack --os-cloud test server show test-4
+-------------------------------------+----------------------------------------------------------------------+
| Field                               | Value                                                                |
+-------------------------------------+----------------------------------------------------------------------+
| OS-DCF:diskConfig | MANUAL |
| OS-EXT-AZ:availability_zone | nova |
| OS-EXT-SRV-ATTR:host | None |
| OS-EXT-SRV-ATTR:hostname | test-4 |
| OS-EXT-SRV-ATTR:hypervisor_hostname | None |
| OS-EXT-SRV-ATTR:instance_name | None |
| OS-EXT-SRV-ATTR:kernel_id | None |
| OS-EXT-SRV-ATTR:launch_index | None |
| OS-EXT-SRV-ATTR:ramdisk_id | None |
| OS-EXT-SRV-ATTR:reservation_id | None |
| OS-EXT-SRV-ATTR:root_device_name | None |
| OS-EXT-SRV-ATTR:user_data | None |
| OS-EXT-STS:power_state | Running |
| OS-EXT-STS:task_state | None |
| OS-EXT-STS:vm_state | active |
| OS-SRV-USG:launched_at | 2025-07-12T21:00:57.000000 |
| OS-SRV-USG:terminated_at | None |
| accessIPv4 | |
| accessIPv6 | |
| addresses | auto_allocated_network=10.42.0.10, 192.168.112.105 |
| config_drive | |
| created | 2025-07-12T21:00:42Z |
| description | None |
| flavor | description=, disk='5', ephemeral='0', extra_specs.scs:cpu-type='crowded-core', extra_specs.scs:name-v1='SCS-1L:1:5', extra_specs.scs:name-v2='SCS-1L-1-5', id='SCS-1L-1-5', is_disabled=, is_public='True', location=, name='SCS-1L-1-5', original_name='SCS-1L-1-5', ram='1024', rxtx_factor=, swap='0', vcpus='1' |
| hostId | 34a1fc7731672dd30b99c62d545357e210e04680ca869ac1915880cb |
| host_status | None |
| id | 550e32e3-64ea-4bb3-847b-bad5f1b31396 |
| image | Cirros 0.6.2 (7466eb57-d941-44df-8e24-295153b95278) |
| key_name | test |
| locked | False |
| locked_reason | None |
| name | test-4 |
| pinned_availability_zone | None |
| progress | 0 |
| project_id | 2b56b716674646668cbc73084993db5d |
| properties | hostname='test-4' |
| security_groups | name='ssh' |
| | name='icmp' |
| server_groups | None |
| status | ACTIVE |
| tags | test |
| trusted_image_certificates | None |
| updated | 2025-07-12T21:01:32Z |
| user_id | 4ad79f9999c949c8a0809a080b26fc30 |
| volumes_attached | |
+-------------------------------------+----------------------------------------------------------------------+
2025-07-12 21:02:54.678707 | orchestrator | + server_ping
2025-07-12 21:02:54.679543 | orchestrator | ++ openstack --os-cloud test floating ip list --status ACTIVE -f value -c 'Floating IP Address'
2025-07-12 21:02:54.679596 | orchestrator | ++ tr -d '\r'
2025-07-12 21:02:57.579629 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 21:02:57.579736 | orchestrator | + ping -c3 192.168.112.108
2025-07-12 21:02:57.593932 | orchestrator | PING 192.168.112.108 (192.168.112.108) 56(84) bytes of data.
2025-07-12 21:02:57.594088 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=1 ttl=63 time=8.08 ms
2025-07-12 21:02:58.589672 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=2 ttl=63 time=2.58 ms
2025-07-12 21:02:59.591962 | orchestrator | 64 bytes from 192.168.112.108: icmp_seq=3 ttl=63 time=2.10 ms
2025-07-12 21:02:59.592069 | orchestrator |
2025-07-12 21:02:59.592086 | orchestrator | --- 192.168.112.108 ping statistics ---
2025-07-12 21:02:59.592100 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-12 21:02:59.592111 | orchestrator | rtt min/avg/max/mdev = 2.104/4.255/8.084/2.714 ms
2025-07-12 21:02:59.592164 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 21:02:59.592176 | orchestrator | + ping -c3 192.168.112.105
2025-07-12 21:02:59.604323 | orchestrator | PING 192.168.112.105 (192.168.112.105) 56(84) bytes of data.
2025-07-12 21:02:59.604410 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=1 ttl=63 time=6.83 ms
2025-07-12 21:03:00.600695 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=2 ttl=63 time=2.91 ms
2025-07-12 21:03:01.601013 | orchestrator | 64 bytes from 192.168.112.105: icmp_seq=3 ttl=63 time=2.00 ms
2025-07-12 21:03:01.601151 | orchestrator |
2025-07-12 21:03:01.601184 | orchestrator | --- 192.168.112.105 ping statistics ---
2025-07-12 21:03:01.601204 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2002ms
2025-07-12 21:03:01.601434 | orchestrator | rtt min/avg/max/mdev = 2.002/3.914/6.834/2.097 ms
2025-07-12 21:03:01.601469 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 21:03:01.601482 | orchestrator | + ping -c3 192.168.112.141
2025-07-12 21:03:01.616723 | orchestrator | PING 192.168.112.141 (192.168.112.141) 56(84) bytes of data.
2025-07-12 21:03:01.616817 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=1 ttl=63 time=10.0 ms
2025-07-12 21:03:02.610864 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=2 ttl=63 time=2.55 ms
2025-07-12 21:03:03.612844 | orchestrator | 64 bytes from 192.168.112.141: icmp_seq=3 ttl=63 time=2.04 ms
2025-07-12 21:03:03.612945 | orchestrator |
2025-07-12 21:03:03.612960 | orchestrator | --- 192.168.112.141 ping statistics ---
2025-07-12 21:03:03.612972 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms
2025-07-12 21:03:03.612980 | orchestrator | rtt min/avg/max/mdev = 2.036/4.867/10.012/3.644 ms
2025-07-12 21:03:03.612991 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r')
2025-07-12 21:03:03.612998 | orchestrator | + ping -c3 192.168.112.178
2025-07-12 21:03:03.624195 | orchestrator | PING 192.168.112.178 (192.168.112.178) 56(84) bytes of data.
2025-07-12 21:03:03.624327 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=1 ttl=63 time=6.29 ms 2025-07-12 21:03:04.621741 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=2 ttl=63 time=2.46 ms 2025-07-12 21:03:05.623179 | orchestrator | 64 bytes from 192.168.112.178: icmp_seq=3 ttl=63 time=2.10 ms 2025-07-12 21:03:05.623274 | orchestrator | 2025-07-12 21:03:05.623288 | orchestrator | --- 192.168.112.178 ping statistics --- 2025-07-12 21:03:05.623297 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2003ms 2025-07-12 21:03:05.623303 | orchestrator | rtt min/avg/max/mdev = 2.102/3.617/6.286/1.892 ms 2025-07-12 21:03:05.624186 | orchestrator | + for address in $(openstack --os-cloud test floating ip list --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r') 2025-07-12 21:03:05.624201 | orchestrator | + ping -c3 192.168.112.140 2025-07-12 21:03:05.637494 | orchestrator | PING 192.168.112.140 (192.168.112.140) 56(84) bytes of data. 2025-07-12 21:03:05.637577 | orchestrator | 64 bytes from 192.168.112.140: icmp_seq=1 ttl=63 time=8.39 ms 2025-07-12 21:03:06.633435 | orchestrator | 64 bytes from 192.168.112.140: icmp_seq=2 ttl=63 time=2.56 ms 2025-07-12 21:03:07.635574 | orchestrator | 64 bytes from 192.168.112.140: icmp_seq=3 ttl=63 time=2.12 ms 2025-07-12 21:03:07.635679 | orchestrator | 2025-07-12 21:03:07.635697 | orchestrator | --- 192.168.112.140 ping statistics --- 2025-07-12 21:03:07.635711 | orchestrator | 3 packets transmitted, 3 received, 0% packet loss, time 2004ms 2025-07-12 21:03:07.635723 | orchestrator | rtt min/avg/max/mdev = 2.115/4.357/8.392/2.859 ms 2025-07-12 21:03:07.635735 | orchestrator | + [[ 9.2.0 == \l\a\t\e\s\t ]] 2025-07-12 21:03:07.959912 | orchestrator | ok: Runtime: 0:16:13.627212 2025-07-12 21:03:08.036471 | 2025-07-12 21:03:08.036622 | TASK [Run tempest] 2025-07-12 21:03:08.572374 | orchestrator | skipping: Conditional result was False 2025-07-12 21:03:08.590611 | 2025-07-12 
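The `server_ping` helper traced above can be reconstructed roughly as follows. This is a sketch based only on the `set -x` output in the log; the real function lives in the OSISM testbed scripts and may differ in details. The `--os-cloud test` profile is the one the job uses, and `tr -d '\r'` strips carriage returns the openstack client can emit:

```shell
# Sketch of the server_ping deploy check, reconstructed from the trace above.
server_ping() {
    local address
    # List every ACTIVE floating IP of the test cloud, one per line.
    for address in $(openstack --os-cloud test floating ip list \
            --status ACTIVE -f value -c "Floating IP Address" | tr -d '\r'); do
        # Three ICMP probes per address; under `set -e` a single
        # unreachable floating IP fails the whole check.
        ping -c3 "$address"
    done
}
```

The log shows this check passing with 0% packet loss for all five floating IPs before the job moves on.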
2025-07-12 21:03:08.590797 | TASK [Check prometheus alert status]
2025-07-12 21:03:09.130648 | orchestrator | skipping: Conditional result was False
2025-07-12 21:03:09.133953 |
2025-07-12 21:03:09.134143 | PLAY RECAP
2025-07-12 21:03:09.134313 | orchestrator | ok: 24 changed: 11 unreachable: 0 failed: 0 skipped: 5 rescued: 0 ignored: 0
2025-07-12 21:03:09.134389 |
2025-07-12 21:03:09.363883 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-07-12 21:03:09.366578 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-12 21:03:11.106459 |
2025-07-12 21:03:11.106670 | PLAY [Post output play]
2025-07-12 21:03:11.132923 |
2025-07-12 21:03:11.133075 | LOOP [stage-output : Register sources]
2025-07-12 21:03:11.202314 |
2025-07-12 21:03:11.202572 | TASK [stage-output : Check sudo]
2025-07-12 21:03:12.127875 | orchestrator | sudo: a password is required
2025-07-12 21:03:12.243384 | orchestrator | ok: Runtime: 0:00:00.011608
2025-07-12 21:03:12.258308 |
2025-07-12 21:03:12.258537 | LOOP [stage-output : Set source and destination for files and folders]
2025-07-12 21:03:12.301831 |
2025-07-12 21:03:12.302154 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-07-12 21:03:12.370876 | orchestrator | ok
2025-07-12 21:03:12.379028 |
2025-07-12 21:03:12.379172 | LOOP [stage-output : Ensure target folders exist]
2025-07-12 21:03:12.882122 | orchestrator | ok: "docs"
2025-07-12 21:03:12.882659 |
2025-07-12 21:03:13.128164 | orchestrator | ok: "artifacts"
2025-07-12 21:03:13.400729 | orchestrator | ok: "logs"
2025-07-12 21:03:13.420959 |
2025-07-12 21:03:13.421136 | LOOP [stage-output : Copy files and folders to staging folder]
2025-07-12 21:03:13.458923 |
2025-07-12 21:03:13.459143 | TASK [stage-output : Make all log files readable]
2025-07-12 21:03:13.782057 | orchestrator | ok
2025-07-12 21:03:13.789723 |
2025-07-12 21:03:13.789854 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-07-12 21:03:13.835072 | orchestrator | skipping: Conditional result was False
2025-07-12 21:03:13.851626 |
2025-07-12 21:03:13.851823 | TASK [stage-output : Discover log files for compression]
2025-07-12 21:03:13.876899 | orchestrator | skipping: Conditional result was False
2025-07-12 21:03:13.888026 |
2025-07-12 21:03:13.888189 | LOOP [stage-output : Archive everything from logs]
2025-07-12 21:03:13.939281 |
2025-07-12 21:03:13.939471 | PLAY [Post cleanup play]
2025-07-12 21:03:13.948201 |
2025-07-12 21:03:13.948340 | TASK [Set cloud fact (Zuul deployment)]
2025-07-12 21:03:14.016949 | orchestrator | ok
2025-07-12 21:03:14.031838 |
2025-07-12 21:03:14.032042 | TASK [Set cloud fact (local deployment)]
2025-07-12 21:03:14.071457 | orchestrator | skipping: Conditional result was False
2025-07-12 21:03:14.085047 |
2025-07-12 21:03:14.085205 | TASK [Clean the cloud environment]
2025-07-12 21:03:14.675230 | orchestrator | 2025-07-12 21:03:14 - clean up servers
2025-07-12 21:03:15.463620 | orchestrator | 2025-07-12 21:03:15 - testbed-manager
2025-07-12 21:03:15.552399 | orchestrator | 2025-07-12 21:03:15 - testbed-node-1
2025-07-12 21:03:15.638748 | orchestrator | 2025-07-12 21:03:15 - testbed-node-2
2025-07-12 21:03:15.737772 | orchestrator | 2025-07-12 21:03:15 - testbed-node-5
2025-07-12 21:03:15.826709 | orchestrator | 2025-07-12 21:03:15 - testbed-node-3
2025-07-12 21:03:15.915922 | orchestrator | 2025-07-12 21:03:15 - testbed-node-4
2025-07-12 21:03:16.011942 | orchestrator | 2025-07-12 21:03:16 - testbed-node-0
2025-07-12 21:03:16.095403 | orchestrator | 2025-07-12 21:03:16 - clean up keypairs
2025-07-12 21:03:16.114658 | orchestrator | 2025-07-12 21:03:16 - testbed
2025-07-12 21:03:16.139623 | orchestrator | 2025-07-12 21:03:16 - wait for servers to be gone
2025-07-12 21:03:26.996660 | orchestrator | 2025-07-12 21:03:26 - clean up ports
2025-07-12 21:03:27.197496 | orchestrator | 2025-07-12 21:03:27 - 0418679f-3a6c-4fe5-b5ef-ac5b0e343fd3
2025-07-12 21:03:27.435055 | orchestrator | 2025-07-12 21:03:27 - 1409262b-fa78-4c33-b109-3c58f6325140
2025-07-12 21:03:27.729003 | orchestrator | 2025-07-12 21:03:27 - 20ec1b06-347a-4b04-90ad-47a55083f9e6
2025-07-12 21:03:28.049354 | orchestrator | 2025-07-12 21:03:28 - 36e6c650-25a7-4330-8607-960d1f419423
2025-07-12 21:03:28.293566 | orchestrator | 2025-07-12 21:03:28 - 6d9852ef-a7cd-4f9d-8096-9ed4e05dba6e
2025-07-12 21:03:28.535313 | orchestrator | 2025-07-12 21:03:28 - b08fe352-7e48-4c34-8739-49897e35b98f
2025-07-12 21:03:29.009812 | orchestrator | 2025-07-12 21:03:29 - ee2fbf57-f703-47b2-9a96-c3b00685d847
2025-07-12 21:03:29.216577 | orchestrator | 2025-07-12 21:03:29 - clean up volumes
2025-07-12 21:03:29.322430 | orchestrator | 2025-07-12 21:03:29 - testbed-volume-5-node-base
2025-07-12 21:03:29.360627 | orchestrator | 2025-07-12 21:03:29 - testbed-volume-4-node-base
2025-07-12 21:03:29.403869 | orchestrator | 2025-07-12 21:03:29 - testbed-volume-0-node-base
2025-07-12 21:03:29.444130 | orchestrator | 2025-07-12 21:03:29 - testbed-volume-1-node-base
2025-07-12 21:03:29.489728 | orchestrator | 2025-07-12 21:03:29 - testbed-volume-2-node-base
2025-07-12 21:03:29.531512 | orchestrator | 2025-07-12 21:03:29 - testbed-volume-manager-base
2025-07-12 21:03:29.576914 | orchestrator | 2025-07-12 21:03:29 - testbed-volume-3-node-base
2025-07-12 21:03:29.620238 | orchestrator | 2025-07-12 21:03:29 - testbed-volume-3-node-3
2025-07-12 21:03:29.663196 | orchestrator | 2025-07-12 21:03:29 - testbed-volume-0-node-3
2025-07-12 21:03:29.706911 | orchestrator | 2025-07-12 21:03:29 - testbed-volume-8-node-5
2025-07-12 21:03:29.747840 | orchestrator | 2025-07-12 21:03:29 - testbed-volume-7-node-4
2025-07-12 21:03:29.788865 | orchestrator | 2025-07-12 21:03:29 - testbed-volume-5-node-5
2025-07-12 21:03:29.828591 | orchestrator | 2025-07-12 21:03:29 - testbed-volume-1-node-4
2025-07-12 21:03:29.873277 | orchestrator | 2025-07-12 21:03:29 - testbed-volume-6-node-3
2025-07-12 21:03:29.922873 | orchestrator | 2025-07-12 21:03:29 - testbed-volume-4-node-4
2025-07-12 21:03:29.964426 | orchestrator | 2025-07-12 21:03:29 - testbed-volume-2-node-5
2025-07-12 21:03:30.012866 | orchestrator | 2025-07-12 21:03:30 - disconnect routers
2025-07-12 21:03:30.142073 | orchestrator | 2025-07-12 21:03:30 - testbed
2025-07-12 21:03:31.071663 | orchestrator | 2025-07-12 21:03:31 - clean up subnets
2025-07-12 21:03:31.120587 | orchestrator | 2025-07-12 21:03:31 - subnet-testbed-management
2025-07-12 21:03:31.296987 | orchestrator | 2025-07-12 21:03:31 - clean up networks
2025-07-12 21:03:31.460960 | orchestrator | 2025-07-12 21:03:31 - net-testbed-management
2025-07-12 21:03:31.769767 | orchestrator | 2025-07-12 21:03:31 - clean up security groups
2025-07-12 21:03:31.809508 | orchestrator | 2025-07-12 21:03:31 - testbed-management
2025-07-12 21:03:31.925324 | orchestrator | 2025-07-12 21:03:31 - testbed-node
2025-07-12 21:03:32.036954 | orchestrator | 2025-07-12 21:03:32 - clean up floating ips
2025-07-12 21:03:32.079028 | orchestrator | 2025-07-12 21:03:32 - 81.163.193.109
2025-07-12 21:03:32.445407 | orchestrator | 2025-07-12 21:03:32 - clean up routers
2025-07-12 21:03:32.515200 | orchestrator | 2025-07-12 21:03:32 - testbed
2025-07-12 21:03:34.142943 | orchestrator | ok: Runtime: 0:00:19.455574
2025-07-12 21:03:34.147215 |
2025-07-12 21:03:34.147411 | PLAY RECAP
2025-07-12 21:03:34.147531 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-07-12 21:03:34.147591 |
2025-07-12 21:03:34.286407 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-07-12 21:03:34.289112 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-12 21:03:35.022239 |
2025-07-12 21:03:35.022428 | PLAY [Cleanup play]
2025-07-12 21:03:35.052517 |
2025-07-12 21:03:35.052739 | TASK [Set cloud fact (Zuul deployment)]
2025-07-12 21:03:35.099884 | orchestrator | ok
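The "Clean the cloud environment" task above tears resources down in dependency order: servers and keypairs first, then a wait until the servers are actually gone, then ports and volumes, then routers are disconnected before subnets, networks, security groups, and floating IPs are removed, and finally the routers themselves. A minimal sketch of that phase ordering, with step names taken from the log (the real implementation is in the OSISM testbed tooling, not reproduced here):

```shell
# Teardown phases in the order the log shows. Each phase must finish before
# the next: ports cannot be deleted while servers still use them, and a
# router must be disconnected before its subnet can be removed.
cleanup_phases() {
    for phase in \
        "clean up servers" \
        "clean up keypairs" \
        "wait for servers to be gone" \
        "clean up ports" \
        "clean up volumes" \
        "disconnect routers" \
        "clean up subnets" \
        "clean up networks" \
        "clean up security groups" \
        "clean up floating ips" \
        "clean up routers"; do
        # Log each phase with a timestamp, matching the format in the trace;
        # the real tooling runs the corresponding openstack delete calls here.
        echo "$(date '+%Y-%m-%d %H:%M:%S') - $phase"
    done
}
```

Note the same cleanup runs a second time in the cleanup.yml post-run playbook below; there it completes in about a second because everything was already deleted.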
2025-07-12 21:03:35.109553 |
2025-07-12 21:03:35.109723 | TASK [Set cloud fact (local deployment)]
2025-07-12 21:03:35.144715 | orchestrator | skipping: Conditional result was False
2025-07-12 21:03:35.162060 |
2025-07-12 21:03:35.162219 | TASK [Clean the cloud environment]
2025-07-12 21:03:36.364640 | orchestrator | 2025-07-12 21:03:36 - clean up servers
2025-07-12 21:03:36.834765 | orchestrator | 2025-07-12 21:03:36 - clean up keypairs
2025-07-12 21:03:36.852127 | orchestrator | 2025-07-12 21:03:36 - wait for servers to be gone
2025-07-12 21:03:36.895324 | orchestrator | 2025-07-12 21:03:36 - clean up ports
2025-07-12 21:03:36.975812 | orchestrator | 2025-07-12 21:03:36 - clean up volumes
2025-07-12 21:03:37.049306 | orchestrator | 2025-07-12 21:03:37 - disconnect routers
2025-07-12 21:03:37.074636 | orchestrator | 2025-07-12 21:03:37 - clean up subnets
2025-07-12 21:03:37.096064 | orchestrator | 2025-07-12 21:03:37 - clean up networks
2025-07-12 21:03:37.232433 | orchestrator | 2025-07-12 21:03:37 - clean up security groups
2025-07-12 21:03:37.266914 | orchestrator | 2025-07-12 21:03:37 - clean up floating ips
2025-07-12 21:03:37.289677 | orchestrator | 2025-07-12 21:03:37 - clean up routers
2025-07-12 21:03:37.700608 | orchestrator | ok: Runtime: 0:00:01.349834
2025-07-12 21:03:37.704378 |
2025-07-12 21:03:37.704531 | PLAY RECAP
2025-07-12 21:03:37.704646 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-07-12 21:03:37.704707 |
2025-07-12 21:03:37.842727 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-07-12 21:03:37.844875 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-12 21:03:38.753533 |
2025-07-12 21:03:38.753710 | PLAY [Base post-fetch]
2025-07-12 21:03:38.770329 |
2025-07-12 21:03:38.770549 | TASK [fetch-output : Set log path for multiple nodes]
2025-07-12 21:03:38.826173 | orchestrator | skipping: Conditional result was False
2025-07-12 21:03:38.840609 |
2025-07-12 21:03:38.840841 | TASK [fetch-output : Set log path for single node]
2025-07-12 21:03:38.884965 | orchestrator | ok
2025-07-12 21:03:38.891478 |
2025-07-12 21:03:38.891623 | LOOP [fetch-output : Ensure local output dirs]
2025-07-12 21:03:39.407704 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/2fd9ca158f1c4f53bff4bdb765da3c0a/work/logs"
2025-07-12 21:03:39.672497 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/2fd9ca158f1c4f53bff4bdb765da3c0a/work/artifacts"
2025-07-12 21:03:39.965460 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/2fd9ca158f1c4f53bff4bdb765da3c0a/work/docs"
2025-07-12 21:03:40.004614 |
2025-07-12 21:03:40.004902 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-07-12 21:03:40.981004 | orchestrator | changed: .d..t...... ./
2025-07-12 21:03:40.981480 | orchestrator | changed: All items complete
2025-07-12 21:03:40.981569 |
2025-07-12 21:03:41.749844 | orchestrator | changed: .d..t...... ./
2025-07-12 21:03:42.498242 | orchestrator | changed: .d..t...... ./
2025-07-12 21:03:42.522984 |
2025-07-12 21:03:42.523171 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-07-12 21:03:43.007771 | orchestrator -> localhost | ok: Item: artifacts Runtime: 0:00:00.011415
2025-07-12 21:03:43.283812 | orchestrator -> localhost | ok: Item: docs Runtime: 0:00:00.009511
2025-07-12 21:03:43.297044 |
2025-07-12 21:03:43.297138 | PLAY RECAP
2025-07-12 21:03:43.297203 | orchestrator | ok: 4 changed: 3 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-07-12 21:03:43.297236 |
2025-07-12 21:03:43.394315 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-07-12 21:03:43.395238 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-12 21:03:44.071488 |
2025-07-12 21:03:44.071615 | PLAY [Base post]
2025-07-12 21:03:44.084496 |
2025-07-12 21:03:44.084611 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-07-12 21:03:45.034890 | orchestrator | changed
2025-07-12 21:03:45.047400 |
2025-07-12 21:03:45.047549 | PLAY RECAP
2025-07-12 21:03:45.047632 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-07-12 21:03:45.047719 |
2025-07-12 21:03:45.135950 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-07-12 21:03:45.138116 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-07-12 21:03:45.914126 |
2025-07-12 21:03:45.914256 | PLAY [Base post-logs]
2025-07-12 21:03:45.923468 |
2025-07-12 21:03:45.923571 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-07-12 21:03:46.328960 | localhost | changed
2025-07-12 21:03:46.346806 |
2025-07-12 21:03:46.346988 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-07-12 21:03:46.381284 | localhost | ok
2025-07-12 21:03:46.384105 |
2025-07-12 21:03:46.384187 | TASK [Set zuul-log-path fact]
2025-07-12 21:03:46.400538 | localhost | ok
2025-07-12 21:03:46.418004 |
2025-07-12 21:03:46.418179 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-07-12 21:03:46.444029 | localhost | ok
2025-07-12 21:03:46.447046 |
2025-07-12 21:03:46.447152 | TASK [upload-logs : Create log directories]
2025-07-12 21:03:46.905739 | localhost | changed
2025-07-12 21:03:46.910196 |
2025-07-12 21:03:46.910330 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-07-12 21:03:47.373368 | localhost -> localhost | ok: Runtime: 0:00:00.005992
2025-07-12 21:03:47.383063 |
2025-07-12 21:03:47.383520 | TASK [upload-logs : Upload logs to log server]
2025-07-12 21:03:47.943969 | localhost | Output suppressed because no_log was given
2025-07-12 21:03:47.949480 |
2025-07-12 21:03:47.949711 | LOOP [upload-logs : Compress console log and json output]
2025-07-12 21:03:48.002713 | localhost | skipping: Conditional result was False
2025-07-12 21:03:48.007212 | localhost | skipping: Conditional result was False
2025-07-12 21:03:48.023931 |
2025-07-12 21:03:48.024134 | LOOP [upload-logs : Upload compressed console log and json output]
2025-07-12 21:03:48.067112 | localhost | skipping: Conditional result was False
2025-07-12 21:03:48.067682 |
2025-07-12 21:03:48.070774 | localhost | skipping: Conditional result was False
2025-07-12 21:03:48.088093 |
2025-07-12 21:03:48.088453 | LOOP [upload-logs : Upload console log and json output]